This book constitutes the refereed proceedings of the Third International Symposium on Stochastic Algorithms: Foundations and Applications, SAGA 2005, held in Moscow, Russia, in October 2005. The 14 revised full papers presented together with 5 invited papers were carefully reviewed and selected for inclusion in the book. The contributed papers included in this volume cover both theoretical and applied aspects of stochastic computations, with a special focus on new algorithmic ideas involving stochastic decisions and the design and evaluation of stochastic algorithms within realistic scenarios.
There are many branches of philosophy called “the philosophy of X,” where X = disciplines ranging from history to physics. The philosophy of artificial intelligence has a long history, and there are many courses and texts with that title. Surprisingly, the philosophy of computer science is not nearly as well-developed. This article proposes topics that might constitute the philosophy of computer science and describes a course covering those topics, along with suggested readings and assignments.
In the technical literature of computer science, the concept of an effective procedure is closely associated with the notion of an instruction that precisely specifies an action. Turing machine instructions are held up as providing paragons of instructions that "precisely describe" or "well define" the actions they prescribe. Numerical algorithms and computer programs are judged effective just insofar as they are thought to be translatable into Turing machine programs. Nontechnical procedures (e.g., recipes, methods) are summarily dismissed as ineffective on the grounds that their instructions lack the requisite precision. But despite the pivotal role played by the notion of a precisely specified instruction in classifying procedures as effective and ineffective, little attention has been paid to the manner in which instructions "precisely specify" the actions they prescribe. It is the purpose of this paper to remedy this defect. The results are startling. The reputed exemplary precision of Turing machine instructions turns out to be a myth. Indeed, the most precise specifications of action are provided not by the procedures of theoretical computer science and mathematics (algorithms) but rather by the nontechnical procedures of everyday life. I close with a discussion of some of the ramifications of these conclusions for understanding and designing concrete computers and their programming languages.
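To make concrete what a Turing machine instruction specifies, here is a minimal sketch of my own (not taken from the paper): each instruction maps a (state, scanned symbol) pair to a new state, a symbol to write, and a head move. The toy transition table below, which merely appends a 1 to a unary numeral, is a hypothetical example.

```python
# Illustrative sketch: a Turing machine whose instructions map
# (state, scanned symbol) to (new state, symbol to write, head move).
# The transition table below is a toy example (append a '1' to a unary
# numeral), chosen only to show what one instruction "precisely specifies".

def run_tm(table, tape, state="q0", blank="_", max_steps=10_000):
    tape = dict(enumerate(tape))   # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        symbol = tape.get(head, blank)
        if (state, symbol) not in table:      # no applicable instruction: halt
            break
        state, write, move = table[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    cells = [tape.get(i, blank) for i in range(min(tape), max(tape) + 1)]
    return "".join(cells)

# (state, symbol) -> (new state, written symbol, head move)
table = {
    ("q0", "1"): ("q0", "1", "R"),    # scan right over the 1s
    ("q0", "_"): ("halt", "1", "R"),  # write one more 1, then halt
}

print(run_tm(table, "111"))  # expected: '1111'
```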
This book constitutes the refereed proceedings of the 8th International Conference on Theory and Applications of Satisfiability Testing, SAT 2005, held in St Andrews, Scotland, in June 2005. The 26 revised full papers presented together with 16 revised short papers presented as posters during the technical programme were carefully selected from 73 submissions. The whole spectrum of research in propositional and quantified Boolean formula satisfiability testing is covered, including proof systems, search techniques, probabilistic analysis of algorithms and their properties, problem encodings, industrial applications, specific tools, case studies, and empirical results.
This book constitutes the refereed proceedings of the 7th International Conference on Theory and Applications of Satisfiability Testing, SAT 2004, held in Vancouver, BC, Canada, in May 2004. The 24 revised full papers presented together with 2 invited papers were carefully selected from 72 submissions. In addition there are 2 reports on the 2004 SAT Solver Competition and the 2004 QBF Solver Evaluation. The whole spectrum of research in propositional and quantified Boolean formula satisfiability testing is covered, bringing together the fields of theoretical and experimental computer science as well as the many relevant application areas.
This paper presents the first bibliometric mapping analysis of the field of computer and information ethics (C&IE). It provides a map of the relations between 400 key terms in the field. This term map can be used to get an overview of concepts and topics in the field and to identify relations between information and communication technology concepts on the one hand and ethical concepts on the other hand. To produce the term map, a data set of over a thousand articles published in leading journals and conference proceedings in the C&IE field was constructed. With the help of various computer algorithms, key terms were identified in the titles and abstracts of the articles and co-occurrence frequencies of these key terms were calculated. Based on the co-occurrence frequencies, the term map was constructed. This was done using a computer program called VOSviewer. The term map provides a visual representation of the C&IE field and, more specifically, of the organization of the field around three main concepts, namely privacy, ethics, and the Internet.
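The core computation described here, counting how often pairs of key terms co-occur in the same title or abstract, can be sketched in a few lines. The snippet below is a hypothetical simplification of my own; VOSviewer's actual term identification and mapping algorithms are more involved, and the tiny corpus and term list are invented purely for illustration.

```python
from itertools import combinations
from collections import Counter

# Hypothetical toy corpus and term list; the real study used over a
# thousand article titles/abstracts and automatically identified terms.
abstracts = [
    "privacy and data protection on the internet",
    "ethics of internet privacy and surveillance",
    "professional ethics in computing",
]
terms = ["privacy", "ethics", "internet", "surveillance"]

cooccurrence = Counter()
for text in abstracts:
    present = sorted(t for t in terms if t in text)
    for a, b in combinations(present, 2):     # count each unordered pair once
        cooccurrence[(a, b)] += 1

# Pair counts like these are the input to the mapping step (e.g. VOSviewer).
for pair, count in cooccurrence.most_common():
    print(pair, count)
```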
A proof of ‘correctness’ for a mathematical algorithm cannot be relevant to executions of a program based on that algorithm because both the algorithm and the proof are based on assumptions that do not hold for computations carried out by real-world computers. Thus, proving the ‘correctness’ of an algorithm cannot establish the trustworthiness of programs based on that algorithm. Despite the (deceptive) sameness of the notations used to represent them, the transformation of an algorithm into an executable program is a wrenching metamorphosis that changes a mathematical abstraction into a prescription for concrete actions to be taken by real computers. Therefore, it is verification of program executions (processes) that is needed, not of program texts that are merely the scripts for those processes. In this view, verification is the empirical investigation of: (a) the behavior that programs invoke in a computer system and (b) the larger context in which that behavior occurs. Here, deduction can play no more, and no less, a role than it does in the empirical sciences.
This book constitutes the refereed proceedings of the 14th International Conference on Theory and Applications of Satisfiability Testing, SAT 2011, held in Ann Arbor, MI, USA, in June 2011. The 25 revised full papers presented together with ...
In this paper I attempt to cast the current program verification debate within a more general perspective on the methodologies and goals of computer science. I show, first, how any method involved in demonstrating the correctness of a physically executing computer program, whether by testing or formal verification, involves reasoning that is defeasible in nature. Then, through a delineation of the senses in which programs can be run as tests, I show that the activities of testing and formal verification do not necessarily share the same goals and thus do not always constitute alternatives. The testing of a program is not always intended to demonstrate a program's correctness. Testing may seek to accept or reject nonprograms, including algorithms, specifications, and hypotheses regarding phenomena. The relationship between these kinds of testing and formal verification is couched in a more fundamental relationship between two views of computer science, one properly containing the other.
There are many algorithm texts that provide lots of well-polished code and proofs of correctness. Instead, this book presents insights, notations, and analogies to help the novice describe and think about algorithms like an expert. By looking at both the big picture and easy step-by-step methods for developing algorithms, the author helps students avoid the common pitfalls. He stresses paradigms such as loop invariants and recursion to unify a huge range of algorithms into a few meta-algorithms. Part of the goal is to teach the students to think abstractly. Without getting bogged down in formal proofs, the book fosters a deeper understanding of how and why each algorithm works. These insights are presented in a slow and clear manner accessible to second- or third-year students of computer science, preparing them to find their own innovative ways to solve problems.
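As one illustration of the loop-invariant paradigm the book stresses (this sketch is mine, not the author's), here is a binary search whose comments state the invariant that is maintained on every iteration and that carries the correctness argument.

```python
def binary_search(a, target):
    """Return an index i with a[i] == target in sorted list a, or -1."""
    lo, hi = 0, len(a)
    # Loop invariant: if target occurs in a, its index lies in [lo, hi).
    while lo < hi:
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        elif a[mid] < target:
            lo = mid + 1   # target, if present, is to the right of mid
        else:
            hi = mid       # target, if present, is to the left of mid
        # The invariant still holds here, and hi - lo has strictly shrunk,
        # which gives both correctness and termination.
    return -1

print(binary_search([1, 3, 5, 7, 9], 7))   # 3
print(binary_search([1, 3, 5, 7, 9], 4))   # -1
```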
A fixed-parameter algorithm is one that provides an optimal solution to a combinatorial problem. This research-level text is an application-oriented introduction to the growing and highly topical area of the development and analysis of efficient fixed-parameter algorithms for hard problems. The book is divided into three parts: a broad introduction that provides the general philosophy and motivation; coverage of the algorithmic methods developed over the years in fixed-parameter algorithmics, forming the core of the book; and a discussion of the essentials of parameterized hardness theory with a focus on W-hardness, which parallels NP-hardness, stating some relations to polynomial-time approximation algorithms, and finishing up with a list of selected case studies to show the wide range of applicability of the presented methodology. Aimed at graduate and research mathematicians, programmers, algorithm designers and computer scientists, the book introduces the basic techniques and results and provides a fresh view on this highly innovative field of algorithmic research.
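To make the notion concrete, here is a sketch of my own (not taken from the book) of the classic bounded-search-tree algorithm for Vertex Cover: it decides whether a graph has a vertex cover of size at most k in time roughly O(2^k · m), that is, exponential only in the parameter k and polynomial in the number of edges m.

```python
def has_vertex_cover(edges, k):
    """Bounded search tree: does the graph given by `edges` have a
    vertex cover of size <= k?  Runs in roughly O(2**k * len(edges)) time."""
    edges = list(edges)
    if not edges:
        return True            # nothing left to cover
    if k == 0:
        return False           # edges remain but no budget left
    u, v = edges[0]
    # Every cover must contain u or v: branch on the two possibilities.
    without_u = [e for e in edges if u not in e]
    without_v = [e for e in edges if v not in e]
    return has_vertex_cover(without_u, k - 1) or has_vertex_cover(without_v, k - 1)

# Toy graph: a triangle plus a pendant edge.
g = [(1, 2), (2, 3), (1, 3), (3, 4)]
print(has_vertex_cover(g, 2))   # True, e.g. {2, 3} covers every edge
print(has_vertex_cover(g, 1))   # False
```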
The formulas-as-types isomorphism tells us that every proof and theorem, in the intuitionistic implicational logic $H_\rightarrow$, corresponds to a lambda term or combinator and its type. The algorithms of Bunder very efficiently find a lambda term inhabitant, if any, of any given type of $H_\rightarrow$ and of many of its subsystems. In most cases the search procedure has a simple bound based roughly on the length of the formula involved. Computer implementations of some of these procedures were done in Dekker. In this paper we extend these methods to full classical propositional logic as well as to its various subsystems. This extension has partly been implemented by Oostdijk.
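A minimal worked instance of the formulas-as-types correspondence invoked here (standard textbook material, not specific to Bunder's algorithms): the combinators K and S inhabit exactly the two axiom schemes of the Hilbert-style presentation of $H_\rightarrow$, so exhibiting an inhabitant of a type amounts to exhibiting a proof of the corresponding formula.

```latex
% The types of the combinators K and S are the two axiom schemes of the
% Hilbert-style system for H_{\rightarrow}; term application mirrors modus ponens.
\begin{align*}
\mathbf{K} = \lambda x.\,\lambda y.\,x
  &\;:\; \alpha \rightarrow (\beta \rightarrow \alpha) \\
\mathbf{S} = \lambda x.\,\lambda y.\,\lambda z.\,x\,z\,(y\,z)
  &\;:\; (\alpha \rightarrow (\beta \rightarrow \gamma)) \rightarrow
         ((\alpha \rightarrow \beta) \rightarrow (\alpha \rightarrow \gamma))
\end{align*}
```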
After the novel, and subsequently cinema, privileged narrative as the key form of cultural expression of the modern age, the computer age introduces its correlate — the database. Why does new media favour the database form over others? Can we explain its popularity by analysing the specificity of the digital medium and of computer programming? What is the relationship between the database and another form, which has traditionally dominated human culture — narrative? In addressing these questions, I discuss the connection between the computer's ontology — the way software represents the world — and the new cultural forms privileged by computer culture, such as the database. I propose that the computerisation of culture involves the projection of two fundamental parts of computer software — data structures and algorithms — onto the cultural sphere. Thus CD-ROMs and Web databases are cultural manifestations of one half of this ontology — data structures; while new media narratives are manifestations of the second part — algorithms. I conclude by proposing that in computer culture database and narrative do not have the same status. Given that on the level of data organisation most new media objects are databases, it is not surprising that on the level of form the database also dominates new media culture.
In the dissertation we study the complexity of generalized quantifiers in natural language. Our perspective is interdisciplinary: we combine philosophical insights with theoretical computer science, experimental cognitive science and linguistic theories.

In Chapter 1 we argue for identifying a part of meaning, the so-called referential meaning (model-checking), with algorithms. Moreover, we discuss the influence of computational complexity theory on cognitive tasks. We give some arguments to treat as cognitively tractable only those problems which can be computed in polynomial time. Additionally, we suggest that plausible semantic theories of the everyday fragment of natural language can be formulated in the existential fragment of second-order logic.

In Chapter 2 we give an overview of the basic notions of generalized quantifier theory, computability theory, and descriptive complexity theory.

In Chapter 3 we prove that PTIME quantifiers are closed under iteration, cumulation and resumption. Next, we discuss the NP-completeness of branching quantifiers. Finally, we show that some Ramsey quantifiers define NP-complete classes of finite models while others stay in PTIME. We also give a sufficient condition for a Ramsey quantifier to be computable in polynomial time.

In Chapter 4 we investigate the computational complexity of polyadic lifts expressing various readings of reciprocal sentences with quantified antecedents. We show a dichotomy between these readings: the strong reciprocal reading can create NP-complete constructions, while the weak and the intermediate reciprocal readings do not. Additionally, we argue that this difference should be acknowledged in the Strong Meaning hypothesis.

In Chapter 5 we study the definability and complexity of the type-shifting approach to collective quantification in natural language. We show that under reasonable complexity assumptions it is not general enough to cover the semantics of all collective quantifiers in natural language. The type-shifting approach cannot lead outside second-order logic and arguably some collective quantifiers are not expressible in second-order logic. As a result, we argue that algebraic (many-sorted) formalisms dealing with collectivity are more plausible than the type-shifting approach. Moreover, we suggest that some collective quantifiers might not be realized in everyday language due to their high computational complexity. Additionally, we introduce the so-called second-order generalized quantifiers to the study of collective semantics.

In Chapter 6 we study the statement known as Hintikka's thesis: that the semantics of sentences like "Most boys and most girls hate each other" is not expressible by linear formulae and one needs to use branching quantification. We discuss possible readings of such sentences and come to the conclusion that they are expressible by linear formulae, as opposed to what Hintikka states. Next, we propose empirical evidence confirming our theoretical predictions that these sentences are sometimes interpreted by people as having the conjunctional reading.

In Chapter 7 we discuss a computational semantics for monadic quantifiers in natural language. We recall that it can be expressed in terms of finite-state and push-down automata. Then we present and criticize the neurological research building on this model. The discussion leads to a new experimental set-up which provides empirical evidence confirming the complexity predictions of the computational model. We show that the differences in reaction time needed for comprehension of sentences with monadic quantifiers are consistent with the complexity differences predicted by the model.

In Chapter 8 we discuss some general open questions and possible directions for future research, e.g., using different measures of complexity, involving game-theory and so on.

In general, our research explores, from different perspectives, the advantages of identifying meaning with algorithms and applying computational complexity analysis to semantic issues. It shows the fruitfulness of such an abstract computational approach for linguistics and cognitive science.
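As a toy instance of identifying referential meaning with an algorithm (my illustration, not an example from the dissertation): the quantifier "most" can be model-checked on a finite model in a single linear pass with one counter, which is the kind of low-complexity procedure treated as cognitively tractable above.

```python
def most(A, B):
    """Model-check 'Most As are Bs' on finite sets: true iff |A ∩ B| > |A \\ B|.
    One linear pass with a single counter."""
    balance = 0
    for x in A:
        balance += 1 if x in B else -1   # +1 for an A that is a B, -1 otherwise
    return balance > 0

# Hypothetical toy model, for illustration only.
boys = {"al", "bo", "cy", "dan"}
happy = {"al", "bo", "cy", "eve"}
print(most(boys, happy))   # True: three of the four boys are happy
```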
Laws of computer science are prescriptive in nature but can have descriptive analogs in the physical sciences. Here, we describe a law of conservation of information in network programming, and various laws of computational motion (invariants) for programming in general, along with their pedagogical utility. Invariants specify constraints on objects in abstract computational worlds, so we describe language and data abstraction employed by software developers and compare them to Floridi's concept of levels of abstraction. We also consider Floridi's structural account of reality and its fit for describing abstract computational worlds. Being abstract, such worlds are products of programmers' creative imaginations, so any "laws" in these worlds are easily broken. The worlds of computational objects need laws in the form of self-prescribed invariants, but the suspension of these laws might be creative acts. Bending the rules of abstract reality facilitates algorithm design, as we demonstrate through the example of search trees.
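A small sketch of the search-tree example gestured at above (my own illustration, assuming the relevant "law" is the usual ordering invariant of binary search trees): insertion relies on the invariant on the way down and re-establishes it, and a separate checker states the law explicitly.

```python
# Invariant ("law") of a binary search tree: for every node, all keys in the
# left subtree are smaller and all keys in the right subtree are larger.
# insert() assumes the invariant on the way down and re-establishes it.

class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def insert(root, key):
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root                      # duplicate keys are ignored

def holds_invariant(node, lo=float("-inf"), hi=float("inf")):
    """Check the ordering law on the whole tree."""
    if node is None:
        return True
    return (lo < node.key < hi
            and holds_invariant(node.left, lo, node.key)
            and holds_invariant(node.right, node.key, hi))

root = None
for k in [5, 2, 8, 1, 9]:
    root = insert(root, k)
print(holds_invariant(root))   # True
```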
Mathematical Logic for Computer Science is a mathematics textbook with theorems and proofs, but the choice of topics has been guided by the needs of computer science students. The method of semantic tableaux provides an elegant way to teach logic that is both theoretically sound and yet sufficiently elementary for undergraduates. To provide a balanced treatment of logic, tableaux are related to deductive proof systems. The logical systems presented are: propositional calculus (including binary decision diagrams); predicate calculus; resolution; Hoare logic; Z; and temporal logic. Answers to exercises (for instructors only) as well as Prolog source code for algorithms may be found via the Springer London web site (http://www.springer.com/978-1-85233-319-5). Mordechai Ben-Ari is an associate professor in the Department of Science Teaching of the Weizmann Institute of Science. He is the author of numerous textbooks on concurrency, programming languages and logic, and has developed software tools for teaching concurrency. In 2004, Ben-Ari received the ACM/SIGCSE Award for Outstanding Contributions to Computer Science Education.
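For readers unfamiliar with the tableau method, here is a compact satisfiability checker in that style; it is an illustrative Python sketch of my own, not the book's Prolog code. Conjunctions extend the current branch, disjunctions split it, and a branch closes as soon as it contains a literal together with its negation.

```python
def satisfiable(branch, literals=frozenset()):
    """Tableau-style satisfiability test for propositional formulas built from
    ('atom', p), ('not', f), ('and', f, g), ('or', f, g)."""
    if not branch:
        return True                                   # open, fully expanded branch
    f, rest = branch[0], branch[1:]
    if f[0] == 'atom' or (f[0] == 'not' and f[1][0] == 'atom'):
        lit = ('+', f[1]) if f[0] == 'atom' else ('-', f[1][1])
        opposite = ('-' if lit[0] == '+' else '+', lit[1])
        if opposite in literals:
            return False                              # branch closes: p and not-p
        return satisfiable(rest, literals | {lit})
    if f[0] == 'and':                                 # alpha rule: same branch
        return satisfiable([f[1], f[2]] + rest, literals)
    if f[0] == 'or':                                  # beta rule: split the branch
        return satisfiable([f[1]] + rest, literals) or satisfiable([f[2]] + rest, literals)
    g = f[1]                                          # f is a negated compound
    if g[0] == 'not':
        return satisfiable([g[1]] + rest, literals)   # double negation
    if g[0] == 'and':                                 # not(A and B) => notA or notB
        return satisfiable([('or', ('not', g[1]), ('not', g[2]))] + rest, literals)
    if g[0] == 'or':                                  # not(A or B) => notA and notB
        return satisfiable([('and', ('not', g[1]), ('not', g[2]))] + rest, literals)

p, q = ('atom', 'p'), ('atom', 'q')
print(satisfiable([('and', p, ('not', p))]))              # False: p and not-p closes
print(satisfiable([('or', ('and', p, q), ('not', p))]))   # True
```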
Types now play an essential role in computer science; their ascent originates from Principia Mathematica. Type checking and type inference algorithms are used to prevent semantic errors in programs, and type theories are the native language of several major interactive theorem provers. Some of these trace key features back to Principia.
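As a minimal illustration of type checking in the sense invoked here (a sketch of my own, not drawn from Principia or from any particular theorem prover): a checker for a Church-style simply typed lambda calculus, which rejects self-application, the kind of term that type discipline was introduced to rule out.

```python
# Minimal type checker for a Church-style simply typed lambda calculus.
# Types: the base type 'o' or ('->', t1, t2).  Terms:
#   ('var', x), ('lam', x, t, body), ('app', f, a)

def typecheck(term, env=None):
    env = env or {}
    tag = term[0]
    if tag == 'var':
        return env[term[1]]
    if tag == 'lam':
        _, x, t, body = term
        body_ty = typecheck(body, {**env, x: t})
        return ('->', t, body_ty)
    if tag == 'app':
        fun_ty = typecheck(term[1], env)
        arg_ty = typecheck(term[2], env)
        if fun_ty[0] != '->' or fun_ty[1] != arg_ty:
            raise TypeError("ill-typed application")
        return fun_ty[2]

identity = ('lam', 'x', 'o', ('var', 'x'))
print(typecheck(identity))                      # ('->', 'o', 'o')

self_app = ('lam', 'x', 'o', ('app', ('var', 'x'), ('var', 'x')))
try:
    typecheck(self_app)
except TypeError as e:
    print("rejected:", e)                       # types rule out self-application
```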
This article reviews the strengths and limitations of five major paradigms of medical computer-assisted decision making (CADM): (1) clinical algorithms, (2) statistical analysis of collections of patient data, (3) mathematical models of physical processes, (4) decision analysis, and (5) symbolic reasoning or artificial intelligence (AI). No one technique is best for all applications, and there is recent promising work which combines two or more established techniques. We emphasize both the inherent power of symbolic reasoning and the promise of artificial intelligence and the other techniques to complement each other.
The tendency towards an increasing integration of the informational web into our daily physical world (in particular in so-called Ambient Intelligent technologies, which combine ideas derived from the fields of Ubiquitous Computing, Intelligent User Interfaces and Ubiquitous Communication) is likely to make the development of successful profiling and personalization algorithms, like the ones currently used by internet companies such as Amazon, even more important than it is today. I argue that the way in which we experience ourselves necessarily goes through a moment of technical mediation. Because of this, algorithmic profiling that thrives on continuous reconfiguration of identification should not be understood as a supplementary process which maps a pre-established identity that exists independently from the profiling practice. In order to clarify how the experience of one's identity can become affected by such machine-profiling, a theoretical exploration of identity is made (including Agamben's understanding of an apparatus, Ricoeur's distinction between idem- and ipse-identity, and Stiegler's notion of a conjunctive–disjunctive relationship towards retentional apparatuses). Although it is clear that no specific predictions about the impact of Ambient Intelligent technologies can be made without taking more particulars into account, the theoretical concepts are used to describe three general scenarios of the ways in which the experience of identity might become affected. To conclude, I argue that the experience of one's identity may affect whether cases of unwarranted discrimination resulting from ubiquitous differentiations and identifications within an Ambient Intelligent environment will become a matter of societal concern.
Part I presents a model of interactive computation and a metric for expressiveness, Part II relates interactive models of computation to physics, and Part III considers empirical models from a philosophical perspective. Interaction machines, which extend Turing Machines to interaction, are shown in Part I to be more expressive than Turing Machines by a direct proof, by adapting Gödel's incompleteness result, and by observability metrics. Observation equivalence provides a tool for measuring expressiveness according to which interactive systems are more expressive than algorithms. Refinement of function equivalence by observation of outer interactive behavior and inner computation steps is examined. The change of focus from algorithms specified by computable functions to interaction specified by observation equivalence captures the essence of empirical computer science. Part II relates interaction in models of computation to observation in the natural sciences. Explanatory power in physics is specified by the same observability metric as expressiveness in interactive systems. Realist models of inner structure are characterized by induction, abduction, and Occam's Razor. Interactive realism extends the hidden-variable model of Einstein to hidden interfaces that provide extra degrees of freedom to formulate hypotheses with testable predictions conforming with quantum theory. Greater expressiveness of collaborative computational observers (writers) than single observers implies that hidden-interface models are more expressive than hidden-variable models. By providing a common foundation for empirical computational and physical models we can use precise results about computational models to establish properties of physical models. Part III shows that the evolution in computing from algorithms to interaction parallels that in physics from rationalism to empiricism. Plato's cave metaphor is interactively extended from Platonic rationalism to empiricism. The Turing test is extended to TMs with hidden interfaces that express interactive thinking richer than the traditional Turing test. Interactive (nonmonotonic) extensions of logic such as the closed-world assumption suggest that interactiveness is incompatible with monotonic logical inference. Procedure call, atomicity of transactions, and taking a fixed point are techniques for closing open systems similar to "preparation" followed by "observation" of a physical system. Pragmatics is introduced as a framework for extending logical models with a fixed syntax and semantics to multiple-interface models that support collaboration among clients sharing common resources.
Lexical semantics has become a major research area within computational linguistics, drawing from psycholinguistics, knowledge representation, and computer algorithms and architecture. Research programmes whose goal is the definition of large lexicons are asking what the appropriate representation structure is for different facets of lexical information. Among these facets, semantic information is probably the most complex and the least explored. Computational Lexical Semantics is one of the first volumes to provide models for the creation of various kinds of computerised lexicons for the automatic treatment of natural language, with applications to machine translation, automatic indexing, database front-ends, and knowledge extraction, among other things. It focuses on semantic issues, as seen by linguists, psychologists, and computer scientists. Besides describing academic research, it also covers ongoing industrial projects.
Hi everybody! It's a great pleasure for me to be back here at the new, improved Santa Fe Institute in this spectacular location. I guess this is my fourth visit and it's always very stimulating, so I'm always very happy to visit you guys. I'd like to tell you what I've been up to lately. First of all, let me say what algorithmic information theory is good for, before telling you about the new version of it I've got.
A complete reconstruction of Lehmer's ENIAC set-up for computing the exponents of 2 modulo primes p is given. This program served as an early test program for the ENIAC (1946). The reconstruction illustrates the difficulties early programmers faced in finding a way between a man-operated and a machine-operated computation. These difficulties concern both the content level (the algorithm) and the formal level (the logic of sequencing operations).
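Stated in modern terms, the number-theoretic task is small; the sketch below assumes the standard reading of Lehmer's problem, namely finding the least e > 0 with 2^e ≡ 1 (mod p) for an odd prime p. The paper's point is that the ENIAC's difficulty lay not in this arithmetic but in sequencing it on 1946 hardware.

```python
def exponent_of_2_mod(p):
    """Least e > 0 with 2**e ≡ 1 (mod p), for an odd prime p,
    found by straightforward repeated doubling (cf. Lehmer's ENIAC task)."""
    e, power = 1, 2 % p
    while power != 1:
        power = (2 * power) % p
        e += 1
    return e

for p in [3, 5, 7, 11, 13]:
    print(p, exponent_of_2_mod(p))   # e.g. the exponent of 2 mod 7 is 3, mod 11 is 10
```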
The computer revolution can be usefully divided into three stages, two of which have already occurred: the introduction stage and the permeation stage. We have only recently entered the third and most important stage, the power stage, in which many of the most serious social, political, legal, and ethical questions involving information technology will present themselves on a large scale. The present article discusses several reasons to believe that future developments in information technology will make computer ethics more vibrant and more important than ever. Computer ethics is here to stay!
In this paper I argue that whether or not a computer can be built that passes the Turing test is a central question in the philosophy of mind. Then I show that the possibility of building such a computer depends on open questions in the philosophy of computer science: the physical Church-Turing thesis and the extended Church-Turing thesis. I use the link between the issues identified in philosophy of mind and philosophy of computer science to respond to a prominent argument against the possibility of building a machine that passes the Turing test. Finally, I respond to objections against the proposed link between questions in the philosophy of mind and philosophy of computer science.
Reasons are given to justify the claim that computer simulations and computational science constitute a distinctively new set of scientific methods and that these methods introduce new issues in the philosophy of science. These issues are both epistemological and methodological in kind.
What is the mind? How does it work? How does it influence behavior? Some psychologists hope to answer such questions in terms of concepts drawn from computer science and artificial intelligence. They test their theories by modeling mental processes in computers. This book shows how computer models are used to study many psychological phenomena, including vision, language, reasoning, and learning. It also shows that computer modeling involves differing theoretical approaches. Computational psychologists disagree about some basic questions. For instance, should the mind be modeled by digital computers, or by parallel-processing systems more like brains? Do computer programs consist of meaningless patterns, or do they embody (and explain) genuine meaning?
Morrison points out many similarities between the roles of simulation models and other sorts of models in science. On the basis of these similarities she claims that running a simulation is epistemologically on a par with doing a traditional experiment and that the output of a simulation therefore counts as a measurement. I agree with her premises but reject the inference. The epistemological payoff of a traditional experiment is greater (or less) confidence in the fit between a model and a target system. The source of this payoff is the existence of a causal interaction with the target system. A computer experiment, which does not go beyond the simulation system itself, lacks any such interaction. So computer experiments cannot confer any additional confidence in the fit (or lack thereof) between the simulation model and the target system.
This article discusses some "historical milestones" in computer ethics, as well as two alternative visions of the future of computer ethics. Topics include the impressive foundation for computer ethics laid down by Norbert Wiener in the 1940s and early 1950s; the pioneering efforts of Donn Parker, Joseph Weizenbaum and Walter Maner in the 1970s; Krystyna Gorniak's hypothesis that computer ethics will evolve into "global ethics"; and Deborah Johnson's speculation that computer ethics may someday "disappear".
This paper draws attention to an increasingly common method of using computer simulations to establish evidential standards in physics. By simulating an actual detection procedure on a computer, physicists produce patterns of data (‘signatures’) that are expected to be observed if a sought-after phenomenon is present. Claims to detect the phenomenon are evaluated by comparing such simulated signatures with actual data. Here I provide a justification for this practice by showing how computer simulations establish the reliability of detection procedures. I argue that this use of computer simulation undermines two fundamental tenets of the Bogen–Woodward account of evidential reasoning. Contrary to Bogen and Woodward’s view, computer-simulated signatures rely on ‘downward’ inferences from phenomena to data. Furthermore, these simulations establish the reliability of experimental setups without physically interacting with the apparatus. I illustrate my claims with a study of the recent detection of the superfluid-to-Mott-insulator phase transition in ultracold atomic gases.
According to the Argument from Disagreement (AD), widespread and persistent disagreement on ethical issues indicates that our moral opinions are not influenced by moral facts, either because there are no such facts or because there are such facts but they fail to influence our moral opinions. In an innovative paper, Gustafsson and Peterson (Synthese, published online 16 October 2010) study the argument by means of computer simulation of opinion dynamics, relying on the well-known model of Hegselmann and Krause (J Artif Soc Soc Simul 5(3):1–33, 2002; J Artif Soc Soc Simul 9(3):1–28, 2006). Their simulations indicate that if our moral opinions were influenced at least slightly by moral facts, we would quickly have reached consensus, even if our moral opinions were also affected by additional factors such as false authorities, external political shifts and random processes. Gustafsson and Peterson conclude that since no such consensus has been reached in real life, the simulation gives us increased reason to take seriously the AD. Our main claim in this paper is that these results are not as robust as Gustafsson and Peterson seem to think they are. If we run similar simulations in the alternative Laputa simulation environment developed by Angere and Olsson (Angere, Synthese, forthcoming; Olsson, Episteme 8(2):127–143, 2011), considerably less support for the AD is forthcoming.
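For orientation, here is a minimal sketch of the bounded-confidence dynamics at issue. It is my own simplification of a Hegselmann-Krause-style model with a weak pull toward the truth; the parameter values are invented, and Gustafsson and Peterson's actual simulations (and the Laputa environment) include further factors such as false authorities and random noise. Each agent moves toward the average of nearby opinions, tilted slightly toward the true value.

```python
# Simplified Hegselmann-Krause dynamics with a weak pull toward "the truth".
# Illustrative parameters only; not the actual settings used by
# Gustafsson and Peterson or by the Laputa environment.
import random

def step(opinions, truth=0.7, eps=0.2, alpha=0.05):
    new = []
    for x in opinions:
        peers = [y for y in opinions if abs(y - x) <= eps]   # bounded confidence
        social = sum(peers) / len(peers)                     # includes x itself
        new.append(alpha * truth + (1 - alpha) * social)     # slight truth bias
    return new

random.seed(0)
opinions = [random.random() for _ in range(50)]
for t in range(100):
    opinions = step(opinions)
print(round(min(opinions), 3), round(max(opinions), 3))   # opinions cluster near the truth
```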
If the universe is a machine, consciousness is not possible. If the universe is more than a machine, then physics is incomplete. Since we are both part of the universe and conscious, physics must be incomplete and the understanding required to construct conscious mechanisms must be sought through the advancement of physics, not the continued application of inadequate concepts. In this paper I will show that an impediment to this advancement is the confusion arising through the use of terms such as 'physical reality' to refer to an absolute a priori Kantian 'Ding an Sich' when they should both be recognized as referring to data structures holding the knowledge upon which we act and nothing more. Once this confusion has been clarified, I will go on to suggest that the cycle of activity updating physical reality becomes a candidate for a conscious process. I will show how implementing algorithms in modern computers can mimic this process but if actual consciousness is to be achieved the update activity must correspond to a cycle in time. Such cycles have been identified with Whitehead's 'actual occasions' and thus I will argue that fundamental events should replace fundamental particles as the building blocks of the universe if consciousness is to be explained.
Many philosophical and public discussions of the ethical aspects of violent computer games typically centre on the relation between playing violent videogames and its supposed direct consequences on violent behaviour. But such an approach rests on a controversial empirical claim, is often one-sided in the range of moral theories used, and remains on a general level with its focus on content alone. In response to these problems, I pick up Matt McCormick's thesis that potential harm from playing computer games is best construed as harm to one's character, and propose to redirect our attention to the question how violent computer games influence the moral character of players. Inspired by the work of Martha Nussbaum, I sketch a positive account of how computer games can stimulate an empathetic and cosmopolitan moral development. Moreover, rather than making a general argument applicable to a wide spectrum of media, my concern is with specific features of violent computer games that make them especially morally problematic in terms of empathy and cosmopolitanism, features that have to do with the connections between content and medium, and between virtuality and reality. I also discuss some remaining problems. In this way I hope to contribute to a less polarised discussion about computer games that does justice to the complexity of their moral dimension, and to offer an account that is helpful to designers, parents, and other stakeholders.
We characterize abstraction in computer science by first comparing the fundamental nature of computer science with that of its cousin mathematics. We consider their primary products, use of formalism, and abstraction objectives, and find that the two disciplines are sharply distinguished. Mathematics, being primarily concerned with developing inference structures, has information neglect as its abstraction objective. Computer science, being primarily concerned with developing interaction patterns, has information hiding as its abstraction objective. We show that abstraction through information hiding is a primary factor in computer science progress and success through an examination of the ubiquitous role of information hiding in programming languages, operating systems, network architecture, and design patterns.
The essays included in the special issue dedicated to the philosophy of computer science examine new philosophical questions that arise from reflection upon conceptual issues in computer science and the insights such an enquiry provides into ongoing philosophical debates.
Changes in information technology lead to new topics and new emphases in computer ethics. The present article examines a variety of such issues, and argues that computer ethics must become more rigorous and develop a stronger theoretical base. The article concludes with a discussion of ways to make computer ethics more effective in bringing helpful changes to the world.
This paper analyzes epistemological and ontological dimensions of Human-Computer Interaction (HCI) through an analysis of the functions of computer systems in relation to their users. It is argued that the primary relation between humans and computer systems has historically been epistemic: computers are used as information-processing and problem-solving tools that extend human cognition, thereby creating hybrid cognitive systems consisting of a human processor and an artificial processor that process information in tandem. In this role, computer systems extend human cognition. Next, it is argued that in recent years, the epistemic relation between humans and computers has been supplemented by an ontic relation. Current computer systems are able to simulate virtual and social environments that extend the interactive possibilities found in the physical environment. This type of relationship is primarily ontic, and extends to objects and places that have a virtual ontology. Increasingly, computers are not just information devices, but portals to worlds that we inhabit. The aforementioned epistemic and ontic relationships are unique to information technology and distinguish human-computer relationships from other human-technology relationships.
Do computers have beliefs? I argue that anyone who answers in the affirmative holds a view that is incompatible with what I shall call the commonsense approach to the propositional attitudes. My claims shall be two. First, the commonsense view places important constraints on what can be acknowledged as a case of having a belief. Second, computers – at least those for which having a belief would be conceived as having a sentence in a belief box – fail to satisfy some of these constraints. This second claim can best be brought out in the context of an examination of the idea of computer self-knowledge and self-deception, but the conclusion is perfectly general: the idea that computers are believers, like the idea that computers could have self-knowledge or be self-deceived, is incompatible with the commonsense view. The significance of the argument lies in the choice it forces on us: whether to revise our notion of belief so as to accommodate the claim that computers are believers, or to give up on that claim so as to preserve our pretheoretic notion of the attitudes. We cannot have it both ways.
Brain Computer Interfaces (BCIs) enable one to control peripheral ICT and robotic devices by processing brain activity on-line. The potential usefulness of BCI systems, initially demonstrated in rehabilitation medicine, is now being explored in education, entertainment, intensive workflow monitoring, security, and training. Ethical issues arising in connection with these investigations are triaged taking into account technological imminence and pervasiveness of BCI technologies. By focussing on imminent technological developments, ethical reflection is informatively grounded into realistic protocols of brain-to-computer communication. In particular, it is argued that human-machine adaptation and shared control distinctively shape autonomy and responsibility issues in current BCI interaction environments. Novel personhood issues are identified and analyzed too. These notably concern (i) the “sub-personal” use of human beings in BCI-enabled cooperative problem solving, and (ii) the pro-active protection of personal identity which BCI rehabilitation therapies may afford, in the light of so-called motor theories of thinking, for the benefit of patients affected by severe motor disabilities.
I argue that the problem of 'moral luck' is an unjustly neglected topic within Computer Ethics. This is unfortunate given that the very nature of computer technology, its 'logical malleability', leads to ever greater levels of complexity, unreliability and uncertainty. The ever widening contexts of application in turn lead to greater scope for the operation of chance and the phenomenon of moral luck. Moral luck bears down most heavily on notions of professional responsibility, the identification and attribution of responsibility. It is immunity from luck that conventionally marks out moral value from other kinds of values such as instrumental, technical, and use value. The paper describes the nature of moral luck and its erosion of the scope of responsibility and agency. Moral luck poses a challenge to the kinds of theoretical approaches often deployed in Computer Ethics when analyzing moral questions arising from the design and implementation of information and communication technologies. The paper considers the impact on consequentialism, virtue ethics, and duty ethics. In addressing cases of moral luck within Computer Ethics, I argue that it is important to recognise the ways in which different types of moral systems are vulnerable, or resistant, to moral luck. Different resolutions are possible depending on the moral framework adopted. Equally, resolution of cases will depend on fundamental moral assumptions. The problem of moral luck in Computer Ethics should prompt us to new ways of looking at risk, accountability and responsibility.