The computer revolution can be usefully divided into three stages, two of which have already occurred: the introduction stage and the permeation stage. We have only recently entered the third and most important stage – the power stage – in which many of the most serious social, political, legal, and ethical questions involving information technology will present themselves on a large scale. The present article discusses several reasons to believe that future developments in information technology will make computer ethics more vibrant and more important than ever. Computer ethics is here to stay!
In this paper I argue that whether or not a computer can be built that passes the Turing test is a central question in the philosophy of mind. Then I show that the possibility of building such a computer depends on open questions in the philosophy of computer science: the physical Church-Turing thesis and the extended Church-Turing thesis. I use the link between the issues identified in philosophy of mind and philosophy of computer science to respond to a prominent argument against the possibility of building a machine that passes the Turing test. Finally, I respond to objections against the proposed link between questions in the philosophy of mind and philosophy of computer science.
Reasons are given to justify the claim that computer simulations and computational science constitute a distinctively new set of scientific methods and that these methods introduce new issues in the philosophy of science. These issues are both epistemological and methodological in kind.
What is the mind? How does it work? How does it influence behavior? Some psychologists hope to answer such questions in terms of concepts drawn from computer science and artificial intelligence. They test their theories by modeling mental processes in computers. This book shows how computer models are used to study many psychological phenomena--including vision, language, reasoning, and learning. It also shows that computer modeling involves differing theoretical approaches. Computational psychologists disagree about some basic questions. For instance, should the mind be modeled by digital computers, or by parallel-processing systems more like brains? Do computer programs consist of meaningless patterns, or do they embody (and explain) genuine meaning?
Morrison points out many similarities between the roles of simulation models and other sorts of models in science. On the basis of these similarities she claims that running a simulation is epistemologically on a par with doing a traditional experiment and that the output of a simulation therefore counts as a measurement. I agree with her premises but reject the inference. The epistemological payoff of a traditional experiment is greater (or less) confidence in the fit between a model and a target system. The source of this payoff is the existence of a causal interaction with the target system. A computer experiment, which does not go beyond the simulation system itself, lacks any such interaction. So computer experiments cannot confer any additional confidence in the fit (or lack thereof) between the simulation model and the target system.
This article discusses some "historical milestones" in computer ethics, as well as two alternative visions of the future of computer ethics. Topics include the impressive foundation for computer ethics laid down by Norbert Wiener in the 1940s and early 1950s; the pioneering efforts of Donn Parker, Joseph Weizenbaum and Walter Maner in the 1970s; Krystyna Gorniak's hypothesis that computer ethics will evolve into "global ethics"; and Deborah Johnson's speculation that computer ethics may someday "disappear".
This paper draws attention to an increasingly common method of using computer simulations to establish evidential standards in physics. By simulating an actual detection procedure on a computer, physicists produce patterns of data (‘signatures’) that are expected to be observed if a sought-after phenomenon is present. Claims to detect the phenomenon are evaluated by comparing such simulated signatures with actual data. Here I provide a justification for this practice by showing how computer simulations establish the reliability of detection procedures. I argue that this use of computer simulation undermines two fundamental tenets of the Bogen–Woodward account of evidential reasoning. Contrary to Bogen and Woodward’s view, computer-simulated signatures rely on ‘downward’ inferences from phenomena to data. Furthermore, these simulations establish the reliability of experimental setups without physically interacting with the apparatus. I illustrate my claims with a study of the recent detection of the superfluid-to-Mott-insulator phase transition in ultracold atomic gases.
According to the Argument from Disagreement (AD) widespread and persistent disagreement on ethical issues indicates that our moral opinions are not influenced by moral facts, either because there are no such facts or because there are such facts but they fail to influence our moral opinions. In an innovative paper, Gustafsson and Peterson (Synthese, published online 16 October, 2010) study the argument by means of computer simulation of opinion dynamics, relying on the well-known model of Hegselmann and Krause (J Artif Soc Soc Simul 5(3):1–33, 2002; J Artif Soc Soc Simul 9(3):1–28, 2006). Their simulations indicate that if our moral opinions were influenced at least slightly by moral facts, we would quickly have reached consensus, even if our moral opinions were also affected by additional factors such as false authorities, external political shifts and random processes. Gustafsson and Peterson conclude that since no such consensus has been reached in real life, the simulation gives us increased reason to take seriously the AD. Our main claim in this paper is that these results are not as robust as Gustafsson and Peterson seem to think they are. If we run similar simulations in the alternative Laputa simulation environment developed by Angere and Olsson (Angere, Synthese, forthcoming and Olsson, Episteme 8(2):127–143, 2011) considerably less support for the AD is forthcoming.
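To make the simulation setup concrete, here is a minimal sketch of Hegselmann–Krause bounded-confidence dynamics with a truth-seeking term, in the spirit of the model discussed above; the parameter names (epsilon, alpha, truth) are illustrative and not taken from Gustafsson and Peterson's actual code.

```python
import random

def hk_step(opinions, epsilon, alpha, truth):
    """One synchronous update of Hegselmann-Krause dynamics.

    Each agent averages the opinions of all agents within its
    confidence interval epsilon, and is additionally pulled toward
    the 'truth' value with weight alpha (alpha = 0 recovers the
    truth-free bounded-confidence model).
    """
    updated = []
    for x in opinions:
        peers = [y for y in opinions if abs(x - y) <= epsilon]
        social = sum(peers) / len(peers)  # bounded-confidence averaging
        updated.append(alpha * truth + (1 - alpha) * social)
    return updated

# Illustrative run: even a slight pull toward the truth (alpha = 0.05)
# tends to drive the population to consensus near the truth value.
random.seed(0)
opinions = [random.random() for _ in range(50)]
for _ in range(100):
    opinions = hk_step(opinions, epsilon=0.2, alpha=0.05, truth=0.7)
print(min(opinions), max(opinions))
```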
Many philosophical and public discussions of the ethical aspects of violent computer games typically centre on the relation between playing violent videogames and its supposed direct consequences on violent behaviour. But such an approach rests on a controversial empirical claim, is often one-sided in the range of moral theories used, and remains on a general level with its focus on content alone. In response to these problems, I pick up Matt McCormick’s thesis that potential harm from playing computer games is best construed as harm to one’s character, and propose to redirect our attention to the question of how violent computer games influence the moral character of players. Inspired by the work of Martha Nussbaum, I sketch a positive account of how computer games can stimulate an empathetic and cosmopolitan moral development. Moreover, rather than making a general argument applicable to a wide spectrum of media, my concern is with specific features of violent computer games that make them especially morally problematic in terms of empathy and cosmopolitanism, features that have to do with the connections between content and medium, and between virtuality and reality. I also discuss some remaining problems. In this way I hope to contribute to a less polarised discussion about computer games that does justice to the complexity of their moral dimension, and to offer an account that is helpful to designers, parents, and other stakeholders.
We characterize abstraction in computer science by first comparing the fundamental nature of computer science with that of its cousin mathematics. We consider their primary products, use of formalism, and abstraction objectives, and find that the two disciplines are sharply distinguished. Mathematics, being primarily concerned with developing inference structures, has information neglect as its abstraction objective. Computer science, being primarily concerned with developing interaction patterns, has information hiding as its abstraction objective. We show that abstraction through information hiding is a primary factor in computer science progress and success through an examination of the ubiquitous role of information hiding in programming languages, operating systems, network architecture, and design patterns.
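A minimal sketch of information hiding in the authors' sense (the example is illustrative, not drawn from the paper): clients of the stack below interact only through its interface, while the representation that stores the items stays hidden and can be replaced without touching client code.

```python
class Stack:
    """Clients see only the interaction pattern (push, pop, is_empty);
    the storage representation behind it is hidden."""

    def __init__(self):
        # Hidden representation: could become a linked list
        # without any client noticing.
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

    def is_empty(self):
        return not self._items

s = Stack()
s.push(1)
s.push(2)
print(s.pop(), s.is_empty())  # 2 False
```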
The essays included in the special issue dedicated to the philosophy of computer science examine new philosophical questions that arise from reflection upon conceptual issues in computer science and the insights such an enquiry provides into ongoing philosophical debates.
Changes in information technology lead to new topics and new emphases in computer ethics. The present article examines a variety of such issues, and argues that computer ethics must become more rigorous and develop a stronger theoretical base. The article concludes with a discussion of ways to make computer ethics more effective in bringing helpful changes to the world.
This paper analyzes epistemological and ontological dimensions of Human-Computer Interaction (HCI) through an analysis of the functions of computer systems in relation to their users. It is argued that the primary relation between humans and computer systems has historically been epistemic: computers are used as information-processing and problem-solving tools that extend human cognition, thereby creating hybrid cognitive systems consisting of a human processor and an artificial processor that process information in tandem. In this role, computer systems extend human cognition. Next, it is argued that in recent years, the epistemic relation between humans and computers has been supplemented by an ontic relation. Current computer systems are able to simulate virtual and social environments that extend the interactive possibilities found in the physical environment. This type of relationship is primarily ontic, and extends to objects and places that have a virtual ontology. Increasingly, computers are not just information devices, but portals to worlds that we inhabit. The aforementioned epistemic and ontic relationships are unique to information technology and distinguish human-computer relationships from other human-technology relationships.
Do computers have beliefs? I argue that anyone who answers in the affirmative holds a view that is incompatible with what I shall call the commonsense approach to the propositional attitudes. My claims shall be two. First, the commonsense view places important constraints on what can be acknowledged as a case of having a belief. Second, computers – at least those for which having a belief would be conceived as having a sentence in a belief box – fail to satisfy some of these constraints. This second claim can best be brought out in the context of an examination of the idea of computer self-knowledge and self-deception, but the conclusion is perfectly general: the idea that computers are believers, like the idea that computers could have self-knowledge or be self-deceived, is incompatible with the commonsense view. The significance of the argument lies in the choice it forces on us: whether to revise our notion of belief so as to accommodate the claim that computers are believers, or to give up on that claim so as to preserve our pretheoretic notion of the attitudes. We cannot have it both ways.
Brain Computer Interfaces (BCIs) enable one to control peripheral ICT and robotic devices by processing brain activity on-line. The potential usefulness of BCI systems, initially demonstrated in rehabilitation medicine, is now being explored in education, entertainment, intensive workflow monitoring, security, and training. Ethical issues arising in connection with these investigations are triaged taking into account technological imminence and pervasiveness of BCI technologies. By focussing on imminent technological developments, ethical reflection is informatively grounded into realistic protocols of brain-to-computer communication. In particular, it is argued that human-machine adaptation and shared control distinctively shape autonomy and responsibility issues in current BCI interaction environments. Novel personhood issues are identified and analyzed too. These notably concern (i) the “sub-personal” use of human beings in BCI-enabled cooperative problem solving, and (ii) the pro-active protection of personal identity which BCI rehabilitation therapies may afford, in the light of so-called motor theories of thinking, for the benefit of patients affected by severe motor disabilities.
This paper presents the first bibliometric mapping analysis of the field of computer and information ethics (C&IE). It provides a map of the relations between 400 key terms in the field. This term map can be used to get an overview of concepts and topics in the field and to identify relations between information and communication technology concepts on the one hand and ethical concepts on the other hand. To produce the term map, a data set of over a thousand articles published in leading journals and conference proceedings in the C&IE field was constructed. With the help of various computer algorithms, key terms were identified in the titles and abstracts of the articles and co-occurrence frequencies of these key terms were calculated. Based on the co-occurrence frequencies, the term map was constructed. This was done using a computer program called VOSviewer. The term map provides a visual representation of the C&IE field and, more specifically, of the organization of the field around three main concepts, namely privacy, ethics, and the Internet.
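As a rough, illustrative stand-in for the co-occurrence step described above (not the authors' actual VOSviewer pipeline, which identifies terms with dedicated algorithms rather than substring matches), one can count how often pairs of key terms appear together in the same abstract:

```python
from collections import Counter
from itertools import combinations

def cooccurrence_counts(abstracts, key_terms):
    """Count how often each pair of key terms occurs in the same text."""
    counts = Counter()
    for text in abstracts:
        present = sorted(t for t in key_terms if t in text.lower())
        for pair in combinations(present, 2):
            counts[pair] += 1
    return counts

abstracts = [
    "Privacy and ethics on the Internet ...",
    "The ethics of Internet surveillance and privacy ...",
]
# Pairs with high counts end up close together on the term map.
print(cooccurrence_counts(abstracts, {"privacy", "ethics", "internet"}))
```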
I argue that the problem of 'moral luck' is an unjustly neglected topic within Computer Ethics. This is unfortunate given that the very nature of computer technology, its 'logical malleability', leads to ever greater levels of complexity, unreliability and uncertainty. The ever widening contexts of application in turn lead to greater scope for the operation of chance and the phenomenon of moral luck. Moral luck bears down most heavily on notions of professional responsibility, the identification and attribution of responsibility. It is immunity from luck that conventionally marks out moral value from other kinds of values such as instrumental, technical, and use value. The paper describes the nature of moral luck and its erosion of the scope of responsibility and agency. Moral luck poses a challenge to the kinds of theoretical approaches often deployed in Computer Ethics when analyzing moral questions arising from the design and implementation of information and communication technologies. The paper considers the impact on consequentialism; virtue ethics; and duty ethics. In addressing cases of moral luck within Computer Ethics, I argue that it is important to recognise the ways in which different types of moral systems are vulnerable, or resistant, to moral luck. Different resolutions are possible depending on the moral framework adopted. Equally, resolution of cases will depend on fundamental moral assumptions. The problem of moral luck in Computer Ethics should prompt us to new ways of looking at risk, accountability and responsibility.
In this paper I attempt to cast the current program verification debate within a more general perspective on the methodologies and goals of computer science. I show, first, how any method involved in demonstrating the correctness of a physically executing computer program, whether by testing or formal verification, involves reasoning that is defeasible in nature. Then, through a delineation of the senses in which programs can be run as tests, I show that the activities of testing and formal verification do not necessarily share the same goals and thus do not always constitute alternatives. The testing of a program is not always intended to demonstrate a program's correctness. Testing may seek to accept or reject nonprograms including algorithms, specifications, and hypotheses regarding phenomena. The relationship between these kinds of testing and formal verification is couched in a more fundamental relationship between two views of computer science, one properly containing the other.
Brain-Computer Interface (BCI) research and (future) applications raise important ethical issues that need to be addressed to promote societal acceptance and adequate policies. Here we report on a survey we conducted among 145 BCI researchers at the 4th International BCI conference, which took place in May–June 2010 in Asilomar, California. We assessed respondents’ opinions about a number of topics. First, we investigated preferences for terminology and definitions relating to BCIs. Second, we assessed respondents’ expectations about the marketability of different BCI applications (BCIs for healthy people, BCIs for assistive technology, BCI-controlled neuroprostheses and BCIs as therapy tools). Third, we investigated opinions about ethical issues related to BCI research for the development of assistive technology: the informed consent process with locked-in patients, risk-benefit analyses, team responsibility, consequences of BCI on patients’ and families’ lives, liability, personal identity, and interaction with the media. Finally, we asked respondents which issues are urgent in BCI research.
Computer simulations can be useful tools to support philosophers in validating their theories, especially when these theories concern phenomena showing nontrivial dynamics. Such theories are usually informal, whilst for computer simulation a formally described model is needed. In this paper, a methodology is proposed to gradually formalise philosophical theories in terms of logically formalised dynamic properties. One outcome of this process is an executable logic-based temporal specification, which within a dedicated software environment can be used as a simulation model to perform simulations. This specification provides a logical formalisation at the lowest aggregation level of the basic mechanisms underlying a process. In addition, dynamic properties at a higher aggregation level that may emerge from the mechanisms specified by the lower level properties, can be specified. Software tools are available to support specification, and to automatically check such higher level properties against the lower level properties and against generated simulation traces. As an illustration, three case studies are discussed showing successful applications of the approach to formalise and analyse, among others, Clark’s theory on extended mind, Damasio’s theory on core consciousness, and Dennett’s perspective on intertemporal decision making and altruism.
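To give a flavour of what checking a higher-level dynamic property against a generated simulation trace involves, here is a simplified sketch; the trace encoding and the property checker are illustrative assumptions, not the authors' dedicated software environment.

```python
def always_leads_to(trace, antecedent, consequent, within):
    """Check a higher-level dynamic property against a trace:
    whenever 'antecedent' holds in a state, 'consequent' must hold
    within 'within' time steps.  A trace is modelled here as a
    sequence of sets of atomic state properties.
    """
    for t, state in enumerate(trace):
        if antecedent in state:
            window = trace[t:t + within + 1]
            if not any(consequent in s for s in window):
                return False
    return True

# A toy trace loosely inspired by Damasio-style core consciousness:
# a stimulus should lead to a feeling within two time steps.
trace = [{"stimulus"}, {"stimulus", "body_state"}, {"feeling"}, set()]
print(always_leads_to(trace, "stimulus", "feeling", within=2))  # True
```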
Computer science is an engineering science whose objective is to determine how to best control interactions among computational objects. We argue that it is a fundamental computer science value to design computational objects so that the dependencies required by their interactions do not result in couplings, since coupling inhibits change. The nature of knowledge in any science is revealed by how concepts in that science change through paradigm shifts, so we analyze classic paradigm shifts in both natural and computer science in terms of decoupling. We show that decoupling pervades computer science both at its core and in the wider context of computing at large, and lies at the very heart of computer science’s value system.
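A minimal sketch of decoupling in the authors' sense (illustrative, not drawn from the paper): the reporter below depends on a small interface rather than on a concrete logger, so the required interaction exists without coupling the two objects' implementations.

```python
from abc import ABC, abstractmethod

class Logger(ABC):
    """The interaction is specified by this small interface, so clients
    depend on what a logger does, not on which logger it is."""
    @abstractmethod
    def log(self, message: str) -> None: ...

class ConsoleLogger(Logger):
    def log(self, message: str) -> None:
        print(message)

class Reporter:
    def __init__(self, logger: Logger):
        # A dependency without coupling: any Logger can be substituted,
        # so Reporter need not change when the logging strategy does.
        self.logger = logger

    def report(self, value: int) -> None:
        self.logger.log(f"value = {value}")

Reporter(ConsoleLogger()).report(42)
```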
This paper identifies two conceptions of security in contemporary concerns over the vulnerability of computers and networks to hostile attack. One is derived from individual-focused conceptions of computer security developed in computer science and engineering. The other is informed by the concerns of national security agencies of government as well as those of corporate intellectual property owners. A comparative evaluation of these two conceptions utilizes the theoretical construct of “securitization,” developed by the Copenhagen School of International Relations.
The present article focuses upon three aspects of computer ethics as a philosophical field: contemporary perspectives, future projections, and current resources. Several topics are covered, including various computer ethics methodologies, the 'uniqueness' of computer ethics questions, and speculations about the impact of globalization and the internet. Also examined is the suggestion that computer ethics may 'disappear' in the future. Finally, there is a brief description of computer ethics resources, such as journals, textbooks, conferences and associations.
Brain Computer Interface (BCI) technology offers potential for human augmentation in areas ranging from communication to home automation, leisure and gaming. This paper addresses ethical challenges associated with the wider scale deployment of BCI as an assistive technology by documenting issues associated with the development of non-invasive BCI technology. Laboratory testing is normally carried out with volunteers, but further testing with subjects who may be in vulnerable groups is often needed to improve system operation. BCI development is technically complex, sometimes requiring lengthy recording sessions to achieve the necessary personalisation of the paradigms, and this can present ethical challenges that vary depending on the subject group. The paper contributes to the on-going ethical discussion surrounding the deployment of BCI outside the specialist laboratory and suggests some tentative guidelines for BCI research teams, appropriate to those deploying the technology, derived from experience on a multisite project. Any tension between deployment and technical progress must be managed by a formal process within a multidisciplinary consortium.
Advertisers often use computers to create fantastic images. Generally, these are perfectly harmless images that are used for comic or dramatic effect. Sometimes, however, they are problematic human images that I call computer-generated images of perfection. Advertisers create these images by using computer technology to remove unwanted traits from models or to generate entire human bodies. They are images that portray ideal human beauty, bodies, or looks. In this paper, I argue that the use of such images is unethical. I begin by explaining the common objections against advertising and by demonstrating how critics might argue that those objections apply to computer-generated images of perfection. Along the way, I demonstrate an ethically significant difference between computer-generated images of perfection and the images in ordinary ads. I argue that although critics might use this fact to apply the common objections to the use of computer-generated images of perfection, the objections fail. Finally, I argue that despite surviving the common objections, the use of computer-generated images of perfection is subject to an ethical objection that is based on aesthetic considerations. Advertisers are ethically obligated to avoid certain aesthetic results that are produced by computer-generated images of perfection.
Standard agent and action-based approaches in computer ethics tend to have difficulty dealing with complex systems-level issues such as the digital divide and globalisation. This paper argues for a value-based agenda to complement traditional approaches in computer ethics, and suggests that one value-based approach well-suited to technological domains can be found in capability theory. Capability approaches have recently become influential in a number of fields with an ethical or policy dimension, but have not so far been applied in computer ethics. The paper introduces two major versions of the theory – those advanced by Amartya Sen and Martha Nussbaum – and argues that they offer potentially valuable conceptual tools for computer ethics. By developing a theory of value based on core human functionings and the capabilities (powers, freedoms) required to realise them, capability theory is shown to have a number of potential benefits that complement standard ethical theory, opening up new approaches to analysis and providing a framework that incorporates a justice as well as an ethics dimension. The underlying functionalism of capability theory is seen to be particularly appropriate to technology ethics, enabling the integration of normative and descriptive analysis of technology in terms of human needs and values. The paper concludes by considering some criticisms of the theory and directions for further development.
This article presents an in-depth analysis of past and present publishing practices in academic computer science to suggest the establishment of a more consistent publishing standard. Historical precedent for academic publishing in computer science is established through the study of anecdotes as well as statistics collected from databases of published computer science papers. After examining these facts alongside information about analogous publishing situations and standards in other scientific fields, the article concludes with a list of basic principles that should be adopted in any computer science publishing standard. These principles would contribute to the reliability and scientific nature of academic publications in computer science and would allow for more straightforward discourse in future publications.
Laws of computer science are prescriptive in nature but can have descriptive analogs in the physical sciences. Here, we describe a law of conservation of information in network programming, and various laws of computational motion (invariants) for programming in general, along with their pedagogical utility. Invariants specify constraints on objects in abstract computational worlds, so we describe language and data abstraction employed by software developers and compare them to Floridi's concept of levels of abstraction. We also consider Floridi's structural account of reality and its fit for describing abstract computational worlds. Being abstract, such worlds are products of programmers' creative imaginations, so any "laws" in these worlds are easily broken. The worlds of computational objects need laws in the form of self-prescribed invariants, but the suspension of these laws might be creative acts. Bending the rules of abstract reality facilitates algorithm design, as we demonstrate through the example of search trees.
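The search-tree example is the authors'; the following encoding of its self-prescribed invariant is an illustrative sketch. The ordering "law" of a binary search tree constrains every key by bounds inherited from its ancestors, and a checker makes the law explicit:

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def bst_invariant(node, lo=float("-inf"), hi=float("inf")):
    """The self-prescribed 'law' of a binary search tree: every key in
    a left subtree is smaller, every key in a right subtree larger,
    within the (lo, hi) bounds inherited from the ancestors.
    """
    if node is None:
        return True
    return (lo < node.key < hi
            and bst_invariant(node.left, lo, node.key)
            and bst_invariant(node.right, node.key, hi))

# A tree that obeys its law, and one whose law has been bent:
legal = Node(5, Node(3), Node(8))
bent = Node(5, Node(3, None, Node(9)), Node(8))  # 9 breaks the bound
print(bst_invariant(legal), bst_invariant(bent))  # True False
```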
We examine the philosophical disputes among computer scientists concerning methodological, ontological, and epistemological questions: Is computer science a branch of mathematics, an engineering discipline, or a natural science? Should knowledge about the behaviour of programs proceed deductively or empirically? Are computer programs on a par with mathematical objects, with mere data, or with mental processes? We conclude that distinct positions taken in regard to these questions emanate from distinct sets of received beliefs or paradigms within the discipline:
– The rationalist paradigm, which was common among theoretical computer scientists, defines computer science as a branch of mathematics, treats programs on a par with mathematical objects, and seeks certain, a priori knowledge about their ‘correctness’ by means of deductive reasoning.
– The technocratic paradigm, promulgated mainly by software engineers and now dominant in much of the discipline, defines computer science as an engineering discipline, treats programs as mere data, and seeks probable, a posteriori knowledge about their reliability empirically using testing suites.
– The scientific paradigm, prevalent in the branches of artificial intelligence, defines computer science as a natural (empirical) science, takes programs to be entities on a par with mental processes, and seeks a priori and a posteriori knowledge about them by combining formal deduction and scientific experimentation.
We demonstrate evidence corroborating the tenets of the scientific paradigm, in particular the claim that program-processes are on a par with mental processes. We conclude with a discussion of the influence that the technocratic paradigm has been having over computer science.
The article shows where the responsibility-gap argument regarding brain-computer interfaces acquires its plausibility, and suggests why the argument is not plausible. By way of explanation, a distinction between the descriptive third-person perspective and the interpretative first-person perspective is introduced. Several examples and metaphors are used to show that ascription of agency and responsibility does not, even in simple cases, require that people be in causal control of every individual detail involved in an event. Taking up the current debate on liability in BCI use, the article provides and discusses some rules that should be followed when potentially harmful BCI-based devices are brought from the laboratory into everyday life.
In this paper, I examine the ethics of e-trust and e-trustworthiness in the context of health care, looking at direct computer-patient interfaces (DCPIs), information systems that provide medical information, diagnosis, advice, consenting and/or treatment directly to patients without clinicians as intermediaries. Designers, manufacturers and deployers of such systems have an ethical obligation to provide evidence of their trustworthiness to users. My argument for this claim is based on evidentialism about trust and trustworthiness: the idea that trust should be based on sound evidence of trustworthiness. Evidence of trustworthiness is a broader notion than one might suppose, including not just information about the risks and performance of the system, but also interactional and context-based information. I suggest some sources of evidence in this broader sense that make it plausible that designers, manufacturers and deployers of DCPIs can provide evidence to users that is cognitively simple, easy to communicate, yet rationally connected with actual trustworthiness.
Brain–computer interfacing (BCI) aims at directly capturing brain activity in order to enable a user to drive an application such as a wheelchair without using peripheral neural or motor systems. Low signal-to-noise ratios, low processing speed, and huge intra- and inter-subject variability currently call for the addition of intelligence to the applications, in order to compensate for errors in the production and/or the decoding of brain signals. However, the combination of minds and machines through BCIs and intelligent devices (IDs) can affect a user’s sense of agency. Particularly confusing cases can arise when the behavioral control switches implicitly from user to ID. I will suggest that in such situations users may be insecure about the extent to which the resulting behavior, whether successful or unsuccessful, is genuinely their own. Hence, while performing an action, a user of a BCI–ID may be uncertain about being the agent of the act. Several cases will be examined and some implications for (legal) responsibility (e.g. establishing the presence of a ‘guilty mind’) are discussed.
The paper provides a critical review of the debate on the foundations of Computer Ethics (CE). Starting from a discussion of Moor's classic interpretation of the need for CE caused by a policy and conceptual vacuum, five positions in the literature are identified and discussed: the "no resolution approach", according to which CE can have no foundation; the professional approach, according to which CE is solely a professional ethics; the radical approach, according to which CE deals with absolutely unique issues, in need of a unique approach; the conservative approach, according to which CE is only a particular applied ethics, discussing new species of traditional moral issues; and the innovative approach, according to which theoretical CE can expand the metaethical discourse with a substantially new perspective. In the course of the analysis, it is argued that, although CE issues are not uncontroversially unique, they are sufficiently novel to render inadequate the adoption of standard macroethics, such as Utilitarianism and Deontologism, as the foundation of CE and hence to prompt the search for a robust ethical theory. Information Ethics (IE) is proposed for that theory, as the satisfactory foundation for CE. IE is characterised as a biologically unbiased extension of environmental ethics, based on the concepts of information object/infosphere/entropy rather than life/ecosystem/pain. In light of the discussion provided in this paper, it is suggested that CE is worthy of independent study because it requires its own application-specific knowledge and is capable of supporting a methodological foundation, IE.
In a historical perspective, what is novel about computer games is that they are not pure games but cultural objects which allow the playful desires identified by Caillois to be fused with craftsmanship, the desire to do a job well for its own sake (Sennett). Play is often defined in opposition to work, for example by Huizinga and Caillois, but craftsmanship has two qualities which can be found in both. Firstly, craftsmanship entails creative attention to the material at hand, pleasurably and patiently built up through rehearsal (cf. Sennett on “material consciousness”)—“creative” is used in a sense read from Bergson which is almost synonymous with “possibility-widening”. Secondly, craftsmanship entails the satisfaction of seeing the end result of one's labours. Both qualities are essential to human well-being (Marx, Sennett, Smith).
Linear Logic is a branch of proof theory which provides refined tools for the study of the computational aspects of proofs. These tools include a duality-based categorical semantics, an intrinsic graphical representation of proofs, the introduction of well-behaved non-commutative logical connectives, and the concepts of polarity and focalisation. These various aspects are illustrated here through introductory tutorials as well as more specialised contributions, with a particular emphasis on applications to computer science: denotational semantics, lambda-calculus, logic programming and concurrency theory. The volume is rounded off by two invited contributions on new topics rooted in recent developments of linear logic. The book derives from a summer school that was the climax of the EU Training and Mobility of Researchers project 'Linear Logic in Computer Science'. It is an excellent introduction to some of the most active research topics in the area.
Experiments (E), computer simulations (CS) and thought experiments (TE) are usually seen as playing different roles in science and as having different epistemologies. Accordingly, they are usually analyzed separately. We argue in this paper that these activities can contribute to answering the same questions by playing the same epistemic role when they are used to unfold the content of a well-described scenario. We emphasize that in such cases, these three activities can be described by means of the same conceptual framework—even if each of them, because they involve different types of processes, falls under these concepts in different ways. We further illustrate our claims by presenting a threefold case study describing how a TE, a CS and an E were indeed used in the same role at different periods to answer the same questions about the possibility of a physical Maxwellian demon. We also point to fluid dynamics as another field where these activities seem to be playing the same unfolding role. We analyze the importance of unfolding as a general task of science and highlight how our description in terms of epistemic functions articulates in a noncommittal way with the epistemology of these three activities and accounts for their similarities and the existence of hybrid forms of activities. We finally emphasize that picturing these activities as functionally substitutable does not imply that they are epistemologically substitutable.
In this paper we review some problems with traditional approaches for acquiring and representing knowledge in the context of developing user interfaces. Methodological implications for knowledge engineering and for human-computer interaction are studied. It turns out that in order to achieve the goal of developing human-oriented (in contrast to technology-oriented) human-computer interfaces, developers have to develop sound knowledge of the structure and the representational dynamics of the cognitive system which is interacting with the computer. We show that in a first step it is necessary to study and investigate the different levels and forms of representation that are involved in the interaction processes between computers and human cognitive systems. Only if designers have achieved some understanding of these representational mechanisms can user interfaces enabling individual experiences and skill development be designed. In this paper we review mechanisms and processes for knowledge representation on a conceptual, epistemological, and methodological level, and sketch some ways out of the identified dilemmas for cognitive modeling in the domain of human-computer interaction.
It is often claimed that scientists can obtain new knowledge about nature by running computer simulations. How is this possible? I answer this question by arguing that computer simulations are arguments. This view parallels Norton’s argument view about thought experiments. I show that computer simulations can be reconstructed as arguments that fully capture the epistemic power of the simulations. Assuming the extended mind hypothesis, I furthermore argue that running the computer simulation is to execute the reconstructing argument. I discuss some objections and reject the view that computer simulations produce knowledge because they are experiments. I conclude by comparing thought experiments and computer simulations, assuming that both are arguments.
In response to the attractive moral and political model of cosmopolitanism, this paper offers an overview of some of the conceptual limitations to that model arising from computer-mediated, interest-based social interaction. I discuss James Bohman's definition of the global and cosmopolitan spheres and how computer-mediated communication might impact the development of those spheres. Additionally, I question the commitment to purely rational models of social cooperation when theorizing a computer-mediated global public sphere, exploring recent alternatives. And finally, I discuss a few of the political and epistemic constraints on participation in the computer-mediated public sphere that threaten the cosmopolitan ideal. "Nature should be thanked for fostering social incompatibility, enviously competitive vanity, and insatiable desires for possessions and even power. Without these desires, all man's excellent natural capacities would never be roused to develop." The ultimate destiny for mankind, according to Kant, who wrote these words in 1784, is to achieve through the use of reason a 'cosmopolitan existence' or "the matrix within which all the original capacities of the human race may develop." Ironically, however, as Habermas and others have realized, Kant's carefully developed vision for 'perpetual peace' among nations and 'world citizenship' is now murky even as the electronically mediated infrastructure of that matrix is rapidly developing. Globalization as a process has intensified to the point where a new social, political, and economic condition has taken hold in the global arena. Recently this condition has been termed "globality" – a term denoting a networked world characterized by speed, mobility, risk, insecurity, and flexibility. And a debate is forming around the question of whether we are still in late modernity and experiencing the culmination of modernity's inherently globalizing tendency, or instead we have entered the networked age, in which the tension between collective and transformative identities and the networking logic of dominant institutions and organizations heralds the end of civil society. In this paper I assume the latter but wish to explore further the political and epistemic constraints on participation in the computer-mediated public sphere. These constraints seem certain to impact the viability of a cosmopolitan public sphere. In the first section I shall discuss James Bohman's definition of the global and cosmopolitan spheres and how computer-mediated communication (hereafter CMC) might impact the development of those spheres. In the second section, I question the commitment to purely rational models of social cooperation when theorizing a global public sphere. I explore recently proposed alternative ways of thinking about this issue in section three. And finally, I discuss a few of the political and epistemic constraints on participation in the computer-mediated public sphere that threaten the cosmopolitan ideal.
Recent proposals for computer-assisted argumentation have drawn on dialectical models of argumentation. When used to assist public policy planning, such systems also raise questions of political legitimacy. Drawing on deliberative democratic theory, we elaborate normative criteria for deliberative legitimacy and illustrate their use for assessing two argumentation systems. Full assessment of such systems requires experiments in which system designers draw on expertise from the social sciences and enter into the policy deliberation itself at the level of participants.
This book introduces the critical concepts and debates that are shaping the emerging field of game studies. Exploring games in the context of cultural studies and media studies, it analyses computer games as the most popular contemporary form of new media production and consumption. The book:
– Argues for the centrality of play in redefining reading, consuming and creating culture
– Offers detailed research into the political economy of games to generate a model of new media production
– Examines the dynamics of power in relation to both the production and consumption of computer games
This is key reading for students, academics and industry practitioners in the fields of cultural studies, new media, media studies and game studies, as well as human-computer interaction and cyberculture.
When faced with an ambiguous ethical situation related to computer technology (CT), the individual's course of action is influenced by personal experiences and opinions, consideration of what co-workers would do in the same situation, and an expectation of what the organization might sanction. In this article, the judgements of over three hundred Association of Information Technology Professionals (AITP) members concerning the actions taken in a series of CT ethical scenarios are examined. Respondents expressed their personal judgement, as well as their perception of their co-workers' judgement, and their understanding of the organization's judgement of the actions described in the scenarios. The findings show that there are differences in respondents' judgements for self, co-workers, and organization. Definitive patterns were also found between groups with and without organizational codes related to CT.
There are many branches of philosophy called “the philosophy of X,” where X = disciplines ranging from history to physics. The philosophy of artificial intelligence has a long history, and there are many courses and texts with that title. Surprisingly, the philosophy of computer science is not nearly as well-developed. This article proposes topics that might constitute the philosophy of computer science and describes a course covering those topics, along with suggested readings and assignments.
Agent-based computer simulation and ethics. Book review by Beckett Sterner (Conceptual and Historical Studies of Science, The University of Chicago), Metascience, pp. 1–5, DOI 10.1007/s11016-012-9660-7.
Although organizations can derive competitive advantage from developing and implementing information systems, they are confronted with a rising number of unethical information practices. Because end-users and computer experts are the conduit to an ethical organizational environment, their intention to report unethical IT-related practices plays a critical role in protecting intellectual property and privacy rights. Using the survey methodology, this article investigates the relationship between willingness to report intellectual property and privacy violations and Machiavellianism, gender and computer literacy in the form of programming experience. We found that gender and computer expertise interact with Machiavellianism to influence individuals’ intention of reporting unethical IT practices. This study helps us to improve our understanding of the emergent ethical issues existing in the IT-enabled decision environment.
Resolving conflicts between different measurements of a property of a physical system may be a key step in a discovery process. With the emergence of large-scale databases and knowledge bases with property measurements, computer support for the task of conflict resolution has become highly desirable. We will describe a method for model-based conflict resolution and the accompanying computer tool KIMA, which have been applied in a case-study in materials science. In order to be a useful aid to scientists, the tool needs to be integrated with other tools in a computer-supported discovery environment. We will give an outline of such a computer-supported discovery environment and argue that its use might lead to new ways of doing science, so-called computer regimes.
This essay presents and reflects upon the construction of a few experimental artworks, among them Caracolomobile, that look for poetic, aesthetic and functional possibilities to bring computer systems to the sensitive universe of human emotions, feelings and expressions. Modern and Contemporary Art have explored such qualities in unfathomable ways and are nowadays turning towards computer systems and their co-related technologies. This universe characterizes and is the focus of these experimental artworks; artworks dealing with entwined subjective and objective qualities, weaving perceptions, sensations and concepts. One of them, Caracolomobile, features an art installation creating a set up for an artificial robot that recognizes humans’ affective states and answers them with movements and sounds. The robot was installed over an artificial mirror lake in an open indigo-blue space surrounded by mirrors. It perceives and discriminates human emotional states and expressions using an interface developed with a non-intrusive neural headset (the neural headset used was developed by Emotiv Systems: http://www.emotiv.com. Accessed 11 August 2011). This artwork raises questions and looks for answers about the preliminary steps for the creation of artefacts that would conduct one to poetically experiment with affect, emotion, sensations and feelings in computational systems. Other works in progress ask about the poetic possibilities of mixing computational autonomous processes and behavioural robotic procedures (Arkin 1998) to create artificial environments mixed with humans.
Human-computer interaction today has got a touch of magic: without understanding the causal coherence, using a computer seems to become the art of using the right spell, with the mouse as the magic wand – the sorcerer's staff. Goethe's poem admits an allegoric interpretation. We explicate the analogy between using a computer and casting a spell, with emphasis on teaching magic skills. The art of creating an ergonomic user interface has to take care of various levels of skills for the human operators. The problem of logical discontinuities, as opposed to continuous control, is the most serious obstacle for human-computer interaction.