In this paper I attempt to cast the current program verification debate within a more general perspective on the methodologies and goals of computer science. I show, first, how any method involved in demonstrating the correctness of a physically executing computer program, whether by testing or formal verification, involves reasoning that is defeasible in nature. Then, through a delineation of the senses in which programs can be run as tests, I show that the activities of testing and formal verification do not necessarily share the same goals and thus do not always constitute alternatives. The testing of a program is not always intended to demonstrate a program's correctness. Testing may seek to accept or reject non-programs, including algorithms, specifications, and hypotheses regarding phenomena. The relationship between these kinds of testing and formal verification is couched in a more fundamental relationship between two views of computer science, one properly containing the other.
In this article, I develop three conceptual innovations within the area of formal metatheory, and present a computer program, called Reconstructor, that implements those developments. The first development consists in a methodology for testing formal reconstructions of scientific theories, which involves checking both whether translations of paradigmatically successful applications into models satisfy the formalisation of the laws, and also whether unsuccessful applications do not. I show how Reconstructor can help carry this out, since it allows the end-user to specify a formal language, input axioms and models formulated in that language, and then ask whether the models satisfy the axioms. The second innovation is the introduction of incomplete models into scientific metatheory, in order to represent cases of missing information. I specify the paracomplete semantics built into Reconstructor to deal with sentences in which denotation failures occur. The third development consists in a new way of explicating the structuralist notion of a determination method, by equating such methods with algorithms. This allows determination methods to be loaded into Reconstructor and then executed within a model to find out the value of a previously non-denoting term. This, in turn, can help test the reconstruction in a different way. Finally, I conclude with some suggestions about additional uses the program may have.
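The testing methodology just described admits a compact sketch: translate an application into a model, then check whether the model satisfies the formalised laws. The following is a minimal stand-in, not Reconstructor's actual interface; the model encoding and the toy law (f = m * a) are assumptions for illustration.

```python
# Minimal sketch of the testing methodology described above. This is a
# stand-in, not Reconstructor's actual interface; the model encoding and
# the toy law (f = m * a) are assumptions for illustration.

def satisfies(model, axioms):
    """Return True iff the model satisfies every axiom."""
    return all(axiom(model) for axiom in axioms)

# Toy formalisation of a law: every object in the domain obeys f = m * a.
def second_law(m):
    return all(abs(m["f"][b] - m["m"][b] * m["a"][b]) < 1e-9 for b in m["domain"])

# A paradigmatically successful application should satisfy the law...
successful = {"domain": {"block"}, "f": {"block": 10.0},
              "m": {"block": 2.0}, "a": {"block": 5.0}}
# ...and an unsuccessful application should not.
unsuccessful = {"domain": {"block"}, "f": {"block": 10.0},
                "m": {"block": 2.0}, "a": {"block": 7.0}}

assert satisfies(successful, [second_law])
assert not satisfies(unsuccessful, [second_law])
```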
To effectively train ethical decision-making in nursing students, a case-based computer program was developed using Flash animation. Seven ethical cases collected from practicing registered nurses’ actual clinical experiences and a six-step Integrated Ethical Decision-Making Model developed by the author were employed in the program. In total, 251 undergraduate students from three nursing schools used the program in their nursing ethics course. The usability of the program and its usefulness in improving 11 abilities needed in ethical decision-making were measured; the program scored higher than 4 on a 5-point scale. Of the students, 82% recommended the program as a valuable complementary tool in the teaching of a nursing ethics course. A variety of encouraging and positive experiences were reported by the students. The computer program is likely to prove practically useful in training nursing students in abstract skills, though certain challenges remain, such as the precise understanding of cognitive or affective responses to ethical issues.
Over recent decades there has been a growing interest in the question of whether computer programs are capable of genuinely creative activity. Although this notion can be explored as a purely philosophical debate, an alternative perspective is to consider what aspects of the behaviour of a program might be noted or measured in order to arrive at an empirically supported judgement that creativity has occurred. We sketch out, in general abstract terms, what goes on when a potentially creative program is constructed and run, and list some of the relationships (for example, between input and output) which might contribute to a decision about creativity. Specifically, we list a number of criteria which might indicate interesting properties of a program’s behaviour, from the perspective of possible creativity. We go on to review some ways in which these criteria have been applied to actual implementations, and some possible improvements to this way of assessing creativity.
Argument-mapping software abounds, and one of the reasons is that using the software has been shown to teach, promote, and improve critical-thinking skills. These positive results are very encouraging, but they also raise the question of whether the computer tutorial environment is producing these results, or whether learning argument mapping, even with just paper and pencil, is sufficient. Based on the results of two empirical studies, I argue that the basic skill of being able to represent an argument diagrammatically plays an important role in the improvement of critical-thinking skills. While these studies do not offer a direct comparison between the two methods, it is important for anyone wishing to employ argument mapping in the classroom to know that significant results can be obtained even with the most rudimentary of tools.
A proof of ‘correctness’ for a mathematical algorithm cannot be relevant to executions of a program based on that algorithm because both the algorithm and the proof are based on assumptions that do not hold for computations carried out by real-world computers. Thus, proving the ‘correctness’ of an algorithm cannot establish the trustworthiness of programs based on that algorithm. Despite the (deceptive) sameness of the notations used to represent them, the transformation of an algorithm into an executable program is a wrenching metamorphosis that changes a mathematical abstraction into a prescription for concrete actions to be taken by real computers. Therefore, it is verification of program executions (processes) that is needed, not of program texts that are merely the scripts for those processes. In this view, verification is the empirical investigation of: (a) the behavior that programs invoke in a computer system and (b) the larger context in which that behavior occurs. Here, deduction can play no more, and no less, a role than it does in the empirical sciences.
This paper presents the first bibliometric mapping analysis of the field of computer and information ethics (C&IE). It provides a map of the relations between 400 key terms in the field. This term map can be used to get an overview of concepts and topics in the field and to identify relations between information and communication technology concepts on the one hand and ethical concepts on the other hand. To produce the term map, a data set of over a thousand articles published in leading journals and conference proceedings in the C&IE field was constructed. With the help of various computer algorithms, key terms were identified in the titles and abstracts of the articles and co-occurrence frequencies of these key terms were calculated. Based on the co-occurrence frequencies, the term map was constructed. This was done using a computer program called VOSviewer. The term map provides a visual representation of the C&IE field and, more specifically, of the organization of the field around three main concepts, namely privacy, ethics, and the Internet.
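The co-occurrence step lends itself to a short sketch. The following is a minimal illustration only, not the authors' pipeline, which relies on dedicated term-identification algorithms and VOSviewer; the documents and key terms below are invented for the example.

```python
from collections import Counter
from itertools import combinations

# Illustrative sketch of the co-occurrence step described above; the
# documents and key terms are invented for the example.

def cooccurrence(documents, key_terms):
    """Count how often each pair of key terms appears in the same document."""
    counts = Counter()
    for doc in documents:
        present = sorted(term for term in key_terms if term in doc.lower())
        for pair in combinations(present, 2):
            counts[pair] += 1
    return counts

docs = [
    "Privacy and ethics on the internet",
    "Internet governance and privacy law",
    "Machine ethics and moral agency",
]
print(cooccurrence(docs, {"privacy", "ethics", "internet"}))
# Counter({('internet', 'privacy'): 2, ('ethics', 'internet'): 1,
#          ('ethics', 'privacy'): 1})
```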
A study is described in which the effectiveness of a computer program (Hermes) in improving argumentative writing is tested. Students were randomly assigned either to a control group or to an experimental group that was asked to use the Hermes program. All students were asked to write essays on controversial topics to an opposed audience. Their essays were content-analysed for dialectical traits. Based on this analysis, it was concluded that the experimental group wrote more dialectically effective essays than the control group, and that the amount of difference between the control and experimental groups was related to the students' intellectual developmental level, as assessed by the Measure of Epistemological Reflection (MER). It is concluded that argumentative writing, operationalized here as dialectical writing, can be improved by computer-assisted instruction, but that attempts to teach such forms of thinking and writing need to take into account students' capacity to benefit from such instruction. Such capacity is defined here as intellectual development.
The paper is devoted to the discussion of the ontological status of computer programs. The most popular conceptions are presented and critically discussed: programs as concrete abstractions, as quasi-particular objects, as mathematical objects, and finally, programs as digital patterns. Advantages and disadvantages of these approaches are pointed out, and some possible solutions are proposed.
We examine the philosophical disputes among computer scientists concerning methodological, ontological, and epistemological questions: Is computer science a branch of mathematics, an engineering discipline, or a natural science? Should knowledge about the behaviour of programs proceed deductively or empirically? Are computer programs on a par with mathematical objects, with mere data, or with mental processes? We conclude that distinct positions taken in regard to these questions emanate from distinct sets of received beliefs or paradigms within the discipline. The rationalist paradigm, which was common among theoretical computer scientists, defines computer science as a branch of mathematics, treats programs on a par with mathematical objects, and seeks certain, a priori knowledge about their ‘correctness’ by means of deductive reasoning. The technocratic paradigm, which was promulgated mainly by software engineers and has come to dominate much of the discipline, defines computer science as an engineering discipline, treats programs as mere data, and seeks probable, a posteriori knowledge about their reliability empirically, using testing suites. The scientific paradigm, which is prevalent in the branches of artificial intelligence, defines computer science as a natural (empirical) science, takes programs to be entities on a par with mental processes, and seeks a priori and a posteriori knowledge about them by combining formal deduction and scientific experimentation. We present evidence corroborating the tenets of the scientific paradigm, in particular the claim that program-processes are on a par with mental processes. We conclude with a discussion of the influence that the technocratic paradigm has been having over computer science.
In the technical literature of computer science, the concept of an effective procedure is closely associated with the notion of an instruction that precisely specifies an action. Turing machine instructions are held up as providing paragons of instructions that "precisely describe" or "well define" the actions they prescribe. Numerical algorithms and computer programs are judged effective just insofar as they are thought to be translatable into Turing machine programs. Nontechnical procedures (e.g., recipes, methods) are summarily dismissed as ineffective on the grounds that their instructions lack the requisite precision. But despite the pivotal role played by the notion of a precisely specified instruction in classifying procedures as effective and ineffective, little attention has been paid to the manner in which instructions "precisely specify" the actions they prescribe. It is the purpose of this paper to remedy this defect. The results are startling. The reputed exemplary precision of Turing machine instructions turns out to be a myth. Indeed, the most precise specifications of action are provided not by the procedures of theoretical computer science and mathematics (algorithms) but rather by the nontechnical procedures of everyday life. I close with a discussion of some of the ramifications of these conclusions for understanding and designing concrete computers and their programming languages.
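To put the kind of instruction at issue on the page, here is a minimal sketch of a Turing-machine instruction table and step rule. This is a generic textbook rendering, not the paper's own example; the toy program is an assumption for illustration.

```python
from collections import defaultdict

# A minimal Turing machine. Each instruction is a quintuple-style entry:
# (state, read symbol) -> (write symbol, head move, next state).
# The toy program just overwrites 0s with 1s, moving right until blank.
PROGRAM = {
    ("q0", "0"): ("1", +1, "q0"),
    ("q0", "1"): ("1", +1, "q0"),
    ("q0", "_"): ("_", 0, "halt"),   # "_" is the blank symbol
}

def run(tape_str):
    tape = defaultdict(lambda: "_", enumerate(tape_str))
    state, head = "q0", 0
    while state != "halt":
        write, move, next_state = PROGRAM[(state, tape[head])]
        tape[head] = write
        head += move
        state = next_state
    return "".join(tape[i] for i in range(len(tape_str)))

print(run("0010"))  # -> "1111"
```

Even here, on the paper's view, the apparent precision presupposes an unanalysed understanding of what counts as writing a symbol or moving the head.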
This paper describes the major components of ImpactCS, a program to develop strategies and curriculum materials for integrating social and ethical considerations into the computer science curriculum. It presents, in particular, the content recommendations of a subcommittee of ImpactCS; and it illustrates the interdisciplinary nature of the field, drawing upon concepts from computer science, sociology, philosophy, psychology, history and economics.
The claim has often been made that passing the Turing Test would not be sufficient to prove that a computer program was intelligent because a trivial program could do it, namely, the “Humongous-Table (HT) Program”, which simply looks up in a table what to say next. This claim is examined in detail. Three ground rules are argued for: (1) That the HT program must be exhaustive, and not be based on some vaguely imagined set of tricks. (2) That the HT program must not be created by some set of sentient beings enacting responses to all possible inputs. (3) That in the current state of cognitive science it must be an open possibility that a computational model of the human mind will be developed that accounts for at least its nonphenomenological properties. Given ground rule 3, the HT program could simply be an “optimized” version of some computational model of a mind, created via the automatic application of program-transformation rules [thus satisfying ground rule 2]. Therefore, whatever mental states one would be willing to impute to an ordinary computational model of the human psyche one should be willing to grant to the optimized version as well. Hence no one could dismiss out of hand the possibility that the HT program was intelligent. This conclusion is important because the Humongous-Table Program Argument is the only argument ever marshalled against the sufficiency of the Turing Test, if we exclude arguments that cognitive science is simply not possible.
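A toy illustration of the Humongous-Table idea follows: the program's only mechanism is a lookup from the entire conversation history so far to the next reply. The table entries are invented; a real HT program would need an entry for every possible history up to the test's length bound, which is what makes it "humongous".

```python
# Toy Humongous-Table program: the next utterance is determined by pure
# table lookup on the whole conversation history. Entries are invented.
TABLE = {
    (): "Hello.",
    ("Hello.", "What is 2+2?"): "4.",
    ("Hello.", "What is 2+2?", "4.", "Are you a machine?"): "Why do you ask?",
}

def ht_reply(history):
    """Return the next utterance by table lookup; no other computation."""
    return TABLE.get(tuple(history), "I don't follow.")

history = []
for judge_input in ["What is 2+2?", "Are you a machine?"]:
    history.append(ht_reply(history))   # machine's turn (looked up)
    history.append(judge_input)         # judge's turn
print(ht_reply(history))                # -> "Why do you ask?"
```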
Can one be fooled into believing that one intended an action that one in fact did not intend? Past experimental paradigms have demonstrated that participants, when provided with false perceptual feedback about their actions, can be fooled into misperceiving the nature of their intended motor act. However, because veridical proprioceptive/perceptual feedback limits the extent to which participants can be fooled, few studies have been able to answer our question and induce the illusion to intend. In a novel paradigm addressing this question, participants were instructed to move a line on the computer screen by use of a phony brain–computer interface. Line movements were actually controlled by a computer program. Demonstrating the illusion to intend, participants reported more intentions to move the line when it moved frequently than when it moved infrequently. Consistent with ideomotor theory, the finding illuminates the intimate liaisons among ideomotor processing, the sense of agency, and action production.
There are theoretical limitations to what can be implemented by a computer program. In this paper we are concerned with a limitation on the strength of computer-implemented deduction. We use a version of the Curry paradox to arrive at this limitation.
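For orientation, the standard Curry derivation runs as follows; this is the textbook form of the paradox, not necessarily the specific version the paper deploys against computer-implemented deduction.

```latex
% Standard Curry derivation: from a sentence C with C <-> (C -> P),
% any sentence P becomes derivable.
\begin{align*}
&\text{Let } C \text{ be such that } C \leftrightarrow (C \to P),
  \text{ for arbitrary } P.\\
&1.\quad C \to (C \to P) &&\text{left-to-right half of the biconditional}\\
&2.\quad C \to P         &&\text{contraction applied to 1}\\
&3.\quad C               &&\text{right-to-left half, applied to 2}\\
&4.\quad P               &&\text{modus ponens on 2 and 3}
\end{align*}
```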
We have developed a formal model of certain types of riddles, and implemented it in a computer program, JAPE, which generates simple punning riddles. In order to test the model, we evaluated the behaviour of the program by having 120 children aged eight to eleven rate JAPE-generated texts, human-generated texts, and non-joke texts for "jokiness" and funniness. This confirmed that JAPE's output texts are indeed jokes, and that there is no significant difference in funniness or jokiness between JAPE's most comprehensible texts and published human-generated jokes.
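A heavily simplified sketch in the spirit of a template-plus-lexicon riddle generator appears below; it is far cruder than the paper's formal model, and the template and lexicon entries are invented stand-ins, not JAPE's actual schemata.

```python
import random

# Simplified template-based punning-riddle generator. The template and
# lexicon below are illustrative stand-ins, not JAPE's actual resources.

# Each entry: (description built from the non-punning part, punning answer).
LEXICON = [
    ("a murderer that has fibre", "a cereal killer"),  # serial -> cereal
    ("a witty rabbit", "a funny bunny"),
]

TEMPLATE = "What do you call {description}? {answer}"

def make_riddle(rng=random):
    description, answer = rng.choice(LEXICON)
    return TEMPLATE.format(description=description, answer=answer)

print(make_riddle())
# e.g. "What do you call a murderer that has fibre? a cereal killer"
```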
The purpose of this paper is to provide an account of the epistemology and metaphysics of universe creation on a computer. The paper begins with F. J. Tipler's argument that our experience is indistinguishable from the experience of someone embedded in a perfect computer simulation of our own universe, hence we cannot know whether or not we are part of such a computer program ourselves. Tipler's argument is treated as a special case of epistemological scepticism, in a similar vein to 'brain-in-a-vat' arguments. It is argued that the hypothesis that our universe is a program running on a digital computer in another universe generates empirical predictions, and is therefore a falsifiable hypothesis. The computer program hypothesis is also treated as a hypothesis about what exists beyond the physical world, and is compared with Kant's metaphysics of noumena. It is proposed that a theory about what exists beyond the physical world should be formulated with the precise concepts of mathematics, and should generate physical predictions. It is argued that if our universe is a program running on a digital computer, then our universe must have compact spatial topology, and the possibilities of observationally testing this prediction are considered. The possibility of testing the computer program hypothesis with the value of the density parameter Omega_0 is also analysed. The informational requirements for a computer to represent a universe exactly and completely are considered. Consequent doubt is thrown upon Tipler's claim that if a hierarchy of computer universes exists, we would not be able to know which 'level of implementation' our universe exists at. It is then argued that a digital computer simulation of a universe cannot exist as a universe. However, the paper concludes with the acknowledgement that an analog computer simulation can be objectively related to the thing it represents, hence an analog computer simulation of a universe could, in principle, exist as a universe.
Background: Computer software is widely used to support literacy learning, but there are few randomised trials to support its effectiveness. There is therefore an urgent need to rigorously evaluate computer software that supports literacy learning. Methods: We undertook a pragmatic randomised controlled trial among pupils aged 11–12 within a single state comprehensive school in the North of England. The pupils were randomised either to receive 10 hours of literacy learning delivered via laptop computers or to act as controls. Both groups received normal literacy learning. A pre-test and two post-tests were given in spelling and literacy. The main pre-defined outcome was improvement in spelling scores. Results: 155 pupils were randomly allocated, 77 to the ICT group and 78 to control. Four pupils left the school before post-testing and 25 pupils did not have both pre- and post-test data; therefore, 63 and 67 pupils were included in the main analysis for the ICT and control groups respectively. After adjusting for pre-test scores there was a slight increase in spelling scores associated with the ICT intervention, but this was not statistically significant (−1.83 to 3.74, p = 0.50). For reading scores there was a statistically significant decrease associated with the ICT intervention. Conclusions: We found no evidence of a statistically significant benefit on spelling outcomes from using a computer program for literacy learning. For reading there seemed to be a reduction in reading scores associated with the use of the program. All new literacy software needs to be tested in a rigorous trial before it is used routinely in schools.
Drawing substantive conclusions from linear causal models that perform acceptably on statistical tests is unreasonable if it is not known how alternatives fare on these same tests. We describe a computer program, TETRAD, that helps to search rapidly for plausible alternatives to a given causal structure. The program is based on principles from statistics, graph theory, philosophy of science, and artificial intelligence. We describe these principles, discuss how TETRAD employs them, and argue that these principles make TETRAD an effective tool. Finally, we illustrate TETRAD's effectiveness by applying it to a multiple indicator model of political and industrial development. A pilot version of the TETRAD program is described in this paper; the current version is described in our forthcoming Discovering Causal Structure: Artificial Intelligence for Statistical Modeling.
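One of the statistical principles such search can exploit is the vanishing-tetrad constraint implied by latent-variable models. The sketch below illustrates that idea only; it is not TETRAD's actual search procedure, and the factor loadings are invented.

```python
import numpy as np

# Sketch of the vanishing-tetrad idea (illustrative only; TETRAD itself
# combines such constraints with graph-theoretic search and statistical
# tests). The loadings below are invented.

def tetrad_difference(corr, i, j, k, l):
    """rho_ij * rho_kl - rho_il * rho_jk; zero when implied by the model."""
    return corr[i, j] * corr[k, l] - corr[i, l] * corr[j, k]

# For indicators of a single latent factor, rho_ij = a_i * a_j, so all
# tetrad differences among distinct indicators vanish.
loadings = np.array([0.8, 0.7, 0.6, 0.9])
corr = np.outer(loadings, loadings)
np.fill_diagonal(corr, 1.0)

print(tetrad_difference(corr, 0, 1, 2, 3))  # ~0.0: consistent with one factor
```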
This paper advocates the importance of an ethical choice in the design of a given technology. As the history of the Internet shows, among various possible examples, the intersection between trust, law, and technology can become either an empowering factor for business and individuals or a tool for infringing human rights. It is of utmost importance not to lose focus on the fact that every technology is a human byproduct, and that when a technology fails, the fault is mainly a human one.