This article presents an in-depth analysis of past and present publishing practices in academic computer science to suggest the establishment of a more consistent publishing standard. Historical precedent for academic publishing in computer science is established through the study of anecdotes as well as statistics collected from databases of published computer science papers. After examining these facts alongside information about analogous publishing situations and standards in other scientific fields, the article concludes with a list of basic principles that should be adopted in any computer science publishing standard. These principles would contribute to the reliability and scientific nature of academic publications in computer science and would allow for more straightforward discourse in future publications.
In this paper I argue that whether or not a computer can be built that passes the Turing test is a central question in the philosophy of mind. Then I show that the possibility of building such a computer depends on open questions in the philosophy of computer science: the physical Church-Turing thesis and the extended Church-Turing thesis. I use the link between the issues identified in philosophy of mind and philosophy of computer science to respond to a prominent argument against the possibility of building a machine that passes the Turing test. Finally, I respond to objections against the proposed link between questions in the philosophy of mind and philosophy of computer science.
We characterize abstraction in computer science by first comparing the fundamental nature of computer science with that of its cousin, mathematics. We consider their primary products, use of formalism, and abstraction objectives, and find that the two disciplines are sharply distinguished. Mathematics, being primarily concerned with developing inference structures, has information neglect as its abstraction objective. Computer science, being primarily concerned with developing interaction patterns, has information hiding as its abstraction objective. We show, through an examination of the ubiquitous role of information hiding in programming languages, operating systems, network architecture, and design patterns, that abstraction through information hiding is a primary factor in the progress and success of computer science.
The essays included in the special issue dedicated to the philosophy of computer science examine new philosophical questions that arise from reflection upon conceptual issues in computer science and the insights such an enquiry provides into ongoing philosophical debates.
In this paper I attempt to cast the current program verification debate within a more general perspective on the methodologies and goals of computer science. I show, first, how any method involved in demonstrating the correctness of a physically executing computer program, whether by testing or formal verification, involves reasoning that is defeasible in nature. Then, through a delineation of the senses in which programs can be run as tests, I show that the activities of testing and formal verification do not necessarily share the same goals and thus do not always constitute alternatives. The testing of a program is not always intended to demonstrate a program's correctness. Testing may seek to accept or reject nonprograms, including algorithms, specifications, and hypotheses regarding phenomena. The relationship between these kinds of testing and formal verification is couched in a more fundamental relationship between two views of computer science, one properly containing the other.
Computer science is an engineering science whose objective is to determine how to best control interactions among computational objects. We argue that it is a fundamental computer science value to design computational objects so that the dependencies required by their interactions do not result in couplings, since coupling inhibits change. The nature of knowledge in any science is revealed by how concepts in that science change through paradigm shifts, so we analyze classic paradigm shifts in both natural and computer science in terms of decoupling. We show that decoupling pervades computer science both at its core and in the wider context of computing at large, and lies at the very heart of computer science's value system.
Laws of computer science are prescriptive in nature but can have descriptive analogs in the physical sciences. Here, we describe a law of conservation of information in network programming, and various laws of computational motion (invariants) for programming in general, along with their pedagogical utility. Invariants specify constraints on objects in abstract computational worlds, so we describe language and data abstraction employed by software developers and compare them to Floridi's concept of levels of abstraction. We also consider Floridi's structural account of reality and its fit for describing abstract computational worlds. Being abstract, such worlds are products of programmers' creative imaginations, so any "laws" in these worlds are easily broken. The worlds of computational objects need laws in the form of self-prescribed invariants, but the suspension of these laws might be creative acts. Bending the rules of abstract reality facilitates algorithm design, as we demonstrate through the example of search trees.
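The idea of a self-prescribed invariant can be made concrete with a minimal sketch: a binary search tree whose ordering "law" is stated as an explicit, checkable predicate. This toy example is mine, not drawn from the paper above; all names are illustrative.

```python
# A self-prescribed invariant in an abstract computational world: the BST
# ordering law, expressed as a predicate over trees. Illustrative sketch only.

class Node:
    def __init__(self, key, left=None, right=None):
        self.key = key
        self.left = left
        self.right = right

def satisfies_bst_invariant(node, lo=float("-inf"), hi=float("inf")):
    """Check the BST law: every key in the left subtree is smaller than
    the node's key, and every key in the right subtree is larger."""
    if node is None:
        return True
    if not (lo < node.key < hi):
        return False
    return (satisfies_bst_invariant(node.left, lo, node.key)
            and satisfies_bst_invariant(node.right, node.key, hi))

# A tree that obeys its law...
ordered = Node(2, Node(1), Node(3))
# ...and one in which the law has been "suspended".
broken = Node(2, Node(3), Node(1))
```

During operations such as rotations, the invariant may be violated transiently and restored afterwards, which is one sense in which "bending" such a law is part of algorithm design.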
We examine the philosophical disputes among computer scientists concerning methodological, ontological, and epistemological questions: Is computer science a branch of mathematics, an engineering discipline, or a natural science? Should knowledge about the behaviour of programs proceed deductively or empirically? Are computer programs on a par with mathematical objects, with mere data, or with mental processes? We conclude that distinct positions taken in regard to these questions emanate from distinct sets of received beliefs or paradigms within the discipline: – The rationalist paradigm, which was common among theoretical computer scientists, defines computer science as a branch of mathematics, treats programs on a par with mathematical objects, and seeks certain, a priori knowledge about their 'correctness' by means of deductive reasoning. – The technocratic paradigm, promulgated mainly by software engineers, which has come to dominate much of the discipline, defines computer science as an engineering discipline, treats programs as mere data, and seeks probable, a posteriori knowledge about their reliability empirically, using testing suites. – The scientific paradigm, prevalent in the branches of artificial intelligence, defines computer science as a natural (empirical) science, takes programs to be entities on a par with mental processes, and seeks a priori and a posteriori knowledge about them by combining formal deduction and scientific experimentation. We present evidence corroborating the tenets of the scientific paradigm, in particular the claim that program-processes are on a par with mental processes. We conclude with a discussion of the influence that the technocratic paradigm has been having over computer science.
Linear Logic is a branch of proof theory which provides refined tools for the study of the computational aspects of proofs. These tools include a duality-based categorical semantics, an intrinsic graphical representation of proofs, the introduction of well-behaved non-commutative logical connectives, and the concepts of polarity and focalisation. These various aspects are illustrated here through introductory tutorials as well as more specialised contributions, with a particular emphasis on applications to computer science: denotational semantics, lambda-calculus, logic programming and concurrency theory. The volume is rounded off by two invited contributions on new topics rooted in recent developments of linear logic. The book derives from a summer school that was the climax of the EU Training and Mobility of Researchers project 'Linear Logic in Computer Science'. It is an excellent introduction to some of the most active research topics in the area.
There are many branches of philosophy called "the philosophy of X," where X = disciplines ranging from history to physics. The philosophy of artificial intelligence has a long history, and there are many courses and texts with that title. Surprisingly, the philosophy of computer science is not nearly as well-developed. This article proposes topics that might constitute the philosophy of computer science and describes a course covering those topics, along with suggested readings and assignments.
This paper presents some applications of Gödel's incompleteness theorems to discussions of problems in computer science. In particular, the problem of the relation between mind and machine (arguments by J.J.C. Smart and J.R. Lucas) is discussed. Next, Gödel's own opinion on this issue is studied. Finally, some interpretations of Gödel's incompleteness theorems from the point of view of information theory are presented.
Since the birth of computing as an academic discipline, the disciplinary identity of computing has been debated fiercely. The most heated question has concerned the scientific status of computing. Some consider computing to be a natural science and some consider it to be an experimental science. Others argue that computing is bad science, whereas some say that computing is not a science at all. This survey article presents viewpoints for and against computing as a science. Those viewpoints are analyzed against basic positions in the philosophy of science. The article aims at giving the reader an overview, background, and a historical and theoretical frame of reference for understanding and interpreting some central questions in the debates about the disciplinary identity of computer science. The article argues that much of the discussion about the scientific nature of computing is misguided due to a deep conceptual uncertainty about science in general as well as computing in particular.
The author has surveyed a quarter of the accredited undergraduate computer science programs in the United States. More than half of these programs offer a "social and ethical implications of computing" course taught by a computer science faculty member, and there appears to be a trend toward teaching ethics classes within computer science departments. Although the decision to create an "in house" computer ethics course may sometimes be a pragmatic response to pressure from the accreditation agency, this paper argues that teaching ethics within a computer science department can provide students and faculty members with numerous benefits. The paper lists topics that can be covered in a computer ethics course and offers some practical suggestions for making the course successful.
At a conference, two engineering professors and a philosophy professor discussed the teaching of ethics in engineering and computer science. The panelists considered the integration of material on ethics into technical courses, the role of ethical theory in teaching applied ethics, the relationship between cases and codes of ethics, the enlisting of support of engineering faculty, the background needed to teach ethics, and the assessment of student outcomes. Several audience members contributed comments, particularly on teaching ethical theory and on student assessment.
This paper describes the major components of ImpactCS, a program to develop strategies and curriculum materials for integrating social and ethical considerations into the computer science curriculum. It presents, in particular, the content recommendations of a subcommittee of ImpactCS, and it illustrates the interdisciplinary nature of the field, drawing upon concepts from computer science, sociology, philosophy, psychology, history and economics.
My purpose in this essay is to clarify the notion of explanation by computer simulation in artificial intelligence and cognitive science. My contention is that computer simulation may be understood as providing two different kinds of explanation, which makes the notion of explanation by computer simulation ambiguous. In order to show this, I shall draw a distinction between two possible ways of understanding the notion of simulation, depending on how one views the relation in which a computing system that performs a cognitive task stands to the program that the system runs while performing that task. Next, I shall suggest that the kind of explanation that results from simulation is radically different in each case. In order to illustrate the difference, I will point out some prima facie methodological difficulties that need to be addressed in order to ensure that simulation plays a legitimate explanatory role in cognitive science, and I shall emphasize how those difficulties are very different depending on the notion of explanation involved.
This book looks at the ways in which conditionals, an integral part of philosophy and logic, can be of practical use in computer programming. It analyzes the different types of conditionals, including their applications and potential problems. Other topics include defeasible logics, the Ramsey test, and a unified view of consequence relation and belief revision. Its implications will be of interest to researchers in logic, philosophy, and computer science, particularly artificial intelligence.
Investigations into inter-level relations in computer science, biology and psychology call for an *empirical* turn in the philosophy of mind. Rather than concentrate on *a priori* discussions of inter-level relations between 'completed' sciences, a case is made for the actual study of the way inter-level relations grow out of the developing sciences. Thus, philosophical inquiries will be made more relevant to the sciences, and, more importantly, philosophical accounts of inter-level relations will be testable by confronting them with what really happens in science. Hence, close observation of the ever-changing reduction relations in the developing sciences, and revision of philosophical positions based on these empirical observations, may, in the long run, be more conducive to an adequate understanding of inter-level relations than a traditional *a priori* approach.
Most of the papers in this collection are from the First International Workshop on Deontic Logic in Computer Science, DEON91, held in Amsterdam in December 1991. AI (especially AI and law, and knowledge representation) and formal system specification are the computer science communities that would seem to be most interested. In fact, this reviewer, a researcher in AI, was surprised to find common ground with a visiting researcher in distributed systems by discussing the contents of this book: he being in the same field as Wieringa, and I being in the same field as Meyer.
Dialogue theory, although it has ancient roots, was put forward in the 1970s in logic as a structure that can be useful for helping to evaluate argumentation and informal fallacies. Recently, however, it has been taken up as a broader subject of investigation in computer science. This paper surveys both the historical and philosophical background of dialogue theory and the latest research initiatives on dialogue theory in computer science. The main components of dialogue theory are briefly explained. Included is a classification of the main types of dialogue that, it is argued, should provide the central focus for studying many important dialogue contexts in specific cases. Following these surveys, a concluding prediction is made about the direction dialogue theory is likely to take in the next century, especially in relation to the growing field of communication studies.
Mathematical Logic for Computer Science is a mathematics textbook with theorems and proofs, but the choice of topics has been guided by the needs of computer science students. The method of semantic tableaux provides an elegant way to teach logic that is both theoretically sound and yet sufficiently elementary for undergraduates. To provide a balanced treatment of logic, tableaux are related to deductive proof systems. The logical systems presented are: propositional calculus (including binary decision diagrams), predicate calculus, resolution, Hoare logic, Z, and temporal logic. Answers to exercises (for instructors only) as well as Prolog source code for algorithms may be found via the Springer London web site: http://www.springer.com/978-1-85233-319-5 Mordechai Ben-Ari is an associate professor in the Department of Science Teaching of the Weizmann Institute of Science. He is the author of numerous textbooks on concurrency, programming languages and logic, and has developed software tools for teaching concurrency. In 2004, Ben-Ari received the ACM/SIGCSE Award for Outstanding Contributions to Computer Science Education.
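The method of semantic tableaux mentioned above can be sketched in a few lines of code. The toy satisfiability checker below is my own illustration, not Ben-Ari's code; formulas are represented as strings (variables) or tuples like `("and", f, g)`, and a branch closes when it contains a literal and its negation.

```python
# A toy semantic-tableau satisfiability checker for propositional logic.
# Formulas: a variable name, ("not", f), ("and", f, g), ("or", f, g).
# Illustrative sketch only; not optimized.

def satisfiable(formula):
    """Return True iff the formula has a model, by tableau expansion."""
    return _expand([formula], set())

def _expand(todo, lits):
    # lits: signed literals on this branch, e.g. ("p", True) for p asserted.
    todo = list(todo)
    while todo:
        f = todo.pop()
        if isinstance(f, str):                       # a plain variable
            if (f, False) in lits:
                return False                         # branch closes
            lits = lits | {(f, True)}
        elif f[0] == "not":
            g = f[1]
            if isinstance(g, str):                   # a negated variable
                if (g, True) in lits:
                    return False
                lits = lits | {(g, False)}
            elif g[0] == "not":                      # double negation
                todo.append(g[1])
            elif g[0] == "and":                      # negated "and": branch
                return (_expand(todo + [("not", g[1])], lits)
                        or _expand(todo + [("not", g[2])], lits))
            else:                                    # negated "or": take both
                todo += [("not", g[1]), ("not", g[2])]
        elif f[0] == "and":                          # alpha rule: both conjuncts
            todo += [f[1], f[2]]
        else:                                        # "or": beta rule, branch
            return (_expand(todo + [f[1]], lits)
                    or _expand(todo + [f[2]], lits))
    return True                                      # saturated open branch
```

For example, `("and", "p", ("not", "p"))` closes every branch and comes out unsatisfiable, while `("and", ("or", "p", "q"), ("not", "p"))` leaves an open branch (q true, p false).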
Computer science only became established as a field in the 1950s, growing out of theoretical and practical research begun in the previous two decades. The field has exhibited immense creativity, ranging from innovative hardware such as the early mainframes to software breakthroughs such as programming languages and the Internet. Martin Gardner worried that "it would be a sad day if human beings, adjusting to the Computer Revolution, became so intellectually lazy that they lost their power of creative thinking" (Gardner, 1978, pp. vi-viii). On the contrary, computers and the theory of computation have provided great opportunities for creative work. This chapter examines several key aspects of creativity in computer science, beginning with the question of how problems arise in computer science. We then discuss the use of analogies in solving key problems in the history of computer science. Our discussion in these sections is based on historical examples, but the following sections discuss the nature of creativity using information from a contemporary source, a set of interviews with practicing computer scientists collected by the Association for Computing Machinery's online student magazine, Crossroads. We then provide a general comparison of creativity in computer science and in the natural sciences.
Taking Brian Cantwell Smith's study, "Limits of Correctness in Computers," as its point of departure, this article explores the role of models in computer science. Smith identifies two kinds of models that play an important role, where specifications are models of problems and programs are models of possible solutions. Both presuppose the existence of conceptualizations as ways of conceiving the world "in certain delimited ways." But high-level programming languages also function as models of virtual (or abstract) machines, while low-level programming languages function as models of causal (or physical) machines. The resulting account suggests that sets of models embedded within models are indispensable for computer programming.
The integration of computer science, biology, and engineering has resulted in the emergence of rapidly growing interdisciplinary fields such as bioinformatics, bioengineering, DNA computing, and systems and synthetic biology. Ideas derived from computer science and engineering can provide innovative solutions to biological problems and advance research in new directions. Although interdisciplinary research has become increasingly prevalent in recent years, the scientists contributing to these efforts largely remain specialists in their original disciplines and are not fully capable of covering the many facets of multidisciplinary problems, which impedes the development of truly integrated solutions. It would be ...
An excessive preoccupation with formalism is impeding the development of computer science. Form-content confusion is discussed relative to three areas: theory of computation, programming languages, and education.
We review some of the history of the computability theory of functionals of higher types, and we demonstrate how contributions from logic and theoretical computer science have shaped this still active subject.
Types now play an essential role in computer science; their ascent originates from Principia Mathematica. Type checking and type inference algorithms are used to prevent semantic errors in programs, and type theories are the native language of several major interactive theorem provers. Some of these trace key features back to Principia.
This paper discusses how facet-like structures occur as a commonplace feature in a variety of computer science disciplines as a means for structuring class hierarchies. The paper then focuses on a mathematical model for facets (and class hierarchies in general), called formal concept analysis, and discusses graphical representations of faceted systems based on this model.
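The core of formal concept analysis can be stated in a few lines: given objects, attributes, and an incidence relation, a formal concept is a pair (A, B) where A is exactly the set of objects sharing all attributes in B, and B is exactly the set of attributes common to all objects in A. The sketch below, with an invented toy context (not taken from the paper), enumerates all concepts by brute force.

```python
# A minimal sketch of formal concept analysis over a toy context.
# The objects, attributes, and incidence relation are illustrative inventions.
from itertools import combinations

objects = {"sedan", "truck", "bicycle"}
attributes = {"motorized", "four-wheeled", "two-wheeled"}
incidence = {
    ("sedan", "motorized"), ("sedan", "four-wheeled"),
    ("truck", "motorized"), ("truck", "four-wheeled"),
    ("bicycle", "two-wheeled"),
}

def common_attributes(objs):
    """The derivation A': attributes shared by every object in objs."""
    return {a for a in attributes
            if all((o, a) in incidence for o in objs)}

def common_objects(attrs):
    """The derivation B': objects having every attribute in attrs."""
    return {o for o in objects
            if all((o, a) in incidence for a in attrs)}

def concepts():
    """All formal concepts (A, B) with A' = B and B' = A, by brute force."""
    found = []
    for r in range(len(objects) + 1):
        for objs in combinations(sorted(objects), r):
            a = set(objs)
            b = common_attributes(a)
            if common_objects(b) == a:   # a is a closed extent
                found.append((frozenset(a), frozenset(b)))
    return found
```

Ordering the resulting concepts by inclusion of extents yields the concept lattice, which is the class-hierarchy structure the paper relates to facets.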
This reader contains the extended abstracts of the seminars organised for the "Computer Science and IT with/for Biology" Seminar Series, held at the Faculty of Computer Science, Free University of Bozen-Bolzano, from October to December 2005. Slides of the presentations are available online at: www.inf.unibz.it/krdb/biology.
Part I presents a model of interactive computation and a metric for expressiveness, Part II relates interactive models of computation to physics, and Part III considers empirical models from a philosophical perspective. Interaction machines, which extend Turing Machines to interaction, are shown in Part I to be more expressive than Turing Machines by a direct proof, by adapting Gödel's incompleteness result, and by observability metrics. Observation equivalence provides a tool for measuring expressiveness according to which interactive systems are more expressive than algorithms. Refinement of function equivalence by observation of outer interactive behavior and inner computation steps is examined. The change of focus from algorithms specified by computable functions to interaction specified by observation equivalence captures the essence of empirical computer science. Part II relates interaction in models of computation to observation in the natural sciences. Explanatory power in physics is specified by the same observability metric as expressiveness in interactive systems. Realist models of inner structure are characterized by induction, abduction, and Occam's Razor. Interactive realism extends the hidden-variable model of Einstein to hidden interfaces that provide extra degrees of freedom to formulate hypotheses with testable predictions conforming with quantum theory. Greater expressiveness of collaborative computational observers (writers) than single observers implies that hidden-interface models are more expressive than hidden-variable models. By providing a common foundation for empirical computational and physical models we can use precise results about computational models to establish properties of physical models. Part III shows that the evolution in computing from algorithms to interaction parallels that in physics from rationalism to empiricism. Plato's cave metaphor is interactively extended from Platonic rationalism to empiricism.
The Turing test is extended to TMs with hidden interfaces that express interactive thinking richer than the traditional Turing test. Interactive (nonmonotonic) extensions of logic such as the closed-world assumption suggest that interactiveness is incompatible with monotonic logical inference. Procedure call, atomicity of transactions, and taking a fixed point are techniques for closing open systems similar to "preparation" followed by "observation" of a physical system. Pragmatics is introduced as a framework for extending logical models with a fixed syntax and semantics to multiple-interface models that support collaboration among clients sharing common resources.
Introduction -- Sanctioning models: theories and their scope -- Methodology for a virtual world -- A tale of two methods -- When theories shake hands -- Models of climate: values and uncertainties -- Reliability without truth -- Conclusion.
Resolving conflicts between different measurements of a property of a physical system may be a key step in a discovery process. With the emergence of large-scale databases and knowledge bases with property measurements, computer support for the task of conflict resolution has become highly desirable. We will describe a method for model-based conflict resolution and the accompanying computer tool KIMA, which have been applied in a case study in materials science. In order to be a useful aid to scientists, the tool needs to be integrated with other tools in a computer-supported discovery environment. We will give an outline of such a computer-supported discovery environment and argue that its use might lead to new ways of doing science, so-called computer regimes.
What is the mind? How does it work? How does it influence behavior? Some psychologists hope to answer such questions in terms of concepts drawn from computer science and artificial intelligence. They test their theories by modeling mental processes in computers. This book shows how computer models are used to study many psychological phenomena--including vision, language, reasoning, and learning. It also shows that computer modeling involves differing theoretical approaches. Computational psychologists disagree about some basic questions. For instance, should the mind be modeled by digital computers, or by parallel-processing systems more like brains? Do computer programs consist of meaningless patterns, or do they embody (and explain) genuine meaning?
Robertson's earlier work, The New Renaissance, projected the likely future impact of computers in changing our culture. Phase Change builds on and deepens his assessment of the role of the computer as a tool driving profound change by examining the role of computers in changing the face of the sciences and mathematics. He shows that paradigm shifts in understanding in science have generally been triggered by the availability of new tools, allowing the investigator a new way of seeing into questions that had not earlier been amenable to scientific probing.
This paper describes the author's experience of infusing an introductory database course with privacy content, and the ongoing project, Integrating Ethics Into the Database Curriculum, which evolved from that experience. The project, which has received funding from the National Science Foundation, involves the creation of a set of privacy modules that can be implemented systematically by database educators throughout the database design thread of an undergraduate course.
Examines some of the potential and some of the problems inherent in using computerized simulations in science and science studies classes by applying lessons from the epistemology of science. While computer simulations are useful pedagogical tools, they are not experiments and thus are of only limited utility as substitutes for actual laboratories. Contains 20 references.
Shanks and St. John (1994a) make a claim that, from the viewpoint of a computer scientist who tries to construct learning systems, seems rather implausible. In this commentary I wish to suggest why, in the hope of shedding light on the relationship between consciousness and learning.
There are a variety of topics in the philosophy of science that need to be rethought, in varying degrees, after one pays careful attention to the ways in which computer simulations are used in the sciences. There are a number of conceptual issues internal to the practice of computer simulation that can benefit from the attention of philosophers. This essay surveys some of the recent literature on simulation from the perspective of the philosophy of science and argues that philosophers have a lot to learn by paying closer attention to the practice of simulation.
Reasons are given to justify the claim that computer simulations and computational science constitute a distinctively new set of scientific methods and that these methods introduce new issues in the philosophy of science. These issues are both epistemological and methodological in kind.
One of the most important contributions of A. Church to logic is his invention of the lambda calculus. We present the genesis of this theory and its two major areas of application: the representation of computations and the resulting functional programming languages on the one hand and the representation of reasoning and the resulting systems of computer mathematics on the other hand.
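The first of the two application areas, representing computation, can be illustrated with Church numerals, the lambda-calculus encoding of the natural numbers. The rendering below in Python lambdas is my own illustrative sketch (the entry above describes Church's theory, not this code): the numeral n is the function that applies f to x exactly n times, and arithmetic reduces to function application.

```python
# Church numerals: computation represented purely by function application.
# Illustrative sketch of lambda-calculus encodings, written as Python lambdas.

zero = lambda f: lambda x: x                         # apply f zero times
succ = lambda n: lambda f: lambda x: f(n(f)(x))      # one more application
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
mul  = lambda m: lambda n: lambda f: m(n(f))         # compose n(f), m times

def to_int(n):
    """Decode a Church numeral by counting applications of the successor."""
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
```

Here `to_int(add(two)(three))` evaluates to 5 and `to_int(mul(two)(three))` to 6, showing how arithmetic falls out of pure application, the same mechanism functional programming languages inherit from the lambda calculus.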
Wendy S. Parker (Department of Philosophy, Ellis Hall 202, Ohio University, Athens, OH 45701, USA), "Computer simulation and philosophy of science," Metascience, pp. 1-4, DOI 10.1007/s11016-011-9567-8. Online ISSN 1467-9981; Print ISSN 0815-0796.
Contrary to common views that philosophy is extraneous to cognitive science, this paper argues that philosophy has a crucial role to play in cognitive science with respect to generality and normativity. General questions include the nature of theories and explanations, the role of computer simulation in cognitive theorizing, and the relations among the different fields of cognitive science. Normative questions include whether human thinking should be Bayesian, whether decision making should maximize expected utility, and how norms should be established. These kinds of general and normative questions make philosophical reflection an important part of progress in cognitive science. Philosophy operates best, however, not with a priori reasoning or conceptual analysis, but rather with empirically informed reflection on a wide range of findings in cognitive science.