The philosophy of computer science is concerned with issues that arise from reflection upon the nature and practice of the discipline of computer science. This book presents an approach to the subject that is centered upon the notion of computational artefact. It provides an analysis of the things of computer science as technical artefacts. Seeing them in this way enables the application of the analytical tools and concepts from the philosophy of technology to the technical artefacts of computer science. With this conceptual framework the author examines some of the central philosophical concerns of computer science including the foundations of semantics, the logical role of specification, the nature of correctness, computational ontology and abstraction, formal methods, computational epistemology and explanation, the methodology of computer science, and the nature of computation. The book will be of value to philosophers and computer scientists.
The specification and implementation of computational artefacts occur throughout the discipline of computer science. Consequently, unpacking the nature of specification should constitute one of the core areas of the philosophy of computer science. This paper presents a conceptual analysis of the central role of specification in the discipline.
Raymond Turner first provides a logical framework for specification and the design of specification languages, then uses this framework to introduce and study ...
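To fix intuitions, here is a minimal sketch (the names and the choice of sorting are mine, not drawn from the paper) of how a specification and an implementation are related: the specification is a logical relation between input and output, the implementation is a program, and correctness is the claim that every run of the program satisfies the relation.

    -- Hypothetical toy example in Haskell; all names are illustrative only.
    import Data.List (sort)

    -- Specification: the output is an ordered permutation of the input.
    spec :: [Int] -> [Int] -> Bool
    spec input output = isOrdered output && isPermutation input output
      where
        isOrdered xs      = and (zipWith (<=) xs (drop 1 xs))
        isPermutation a b = sort a == sort b

    -- Implementation: insertion sort, one candidate among many.
    impl :: [Int] -> [Int]
    impl = foldr insert []
      where
        insert x []                 = [x]
        insert x (y:ys) | x <= y    = x : y : ys
                        | otherwise = y : insert x ys

    -- Correctness: spec xs (impl xs) holds for every xs,
    -- e.g. spec [3,1,2] (impl [3,1,2]) == True.

Note that the specification deliberately says nothing about how the output is to be produced; it governs the implementation without determining it.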
Taken at face value, a programming language is defined by a formal grammar. But, clearly, there is more to it. By themselves, the naked strings of the language do not determine when a program is correct relative to some specification. For this, the constructs of the language must be given some semantic content. Moreover, to be employed to generate physical computations, a programming language must have a physical implementation. How are we to conceptualize this complex package? Ontologically, what kind of thing is it? In this paper, we shall argue that an appropriate conceptualization is furnished by the notion of a technical artifact.
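The point can be made concrete with a toy language of my own devising (nothing in the paper depends on it): the grammar fixes which strings count as programs, but two different semantic maps can give the very same strings different content.

    -- Hypothetical sketch in Haskell: syntax (the data type) does not
    -- by itself fix meaning; a semantic function must be supplied.

    -- Grammar: the well-formed expressions of a tiny language.
    data Expr = Lit Int | Add Expr Expr | Mul Expr Expr

    -- One semantics: evaluation over the integers.
    evalInt :: Expr -> Int
    evalInt (Lit n)   = n
    evalInt (Add a b) = evalInt a + evalInt b
    evalInt (Mul a b) = evalInt a * evalInt b

    -- A rival semantics over the same grammar: arithmetic modulo 7.
    evalMod7 :: Expr -> Int
    evalMod7 (Lit n)   = n `mod` 7
    evalMod7 (Add a b) = (evalMod7 a + evalMod7 b) `mod` 7
    evalMod7 (Mul a b) = (evalMod7 a * evalMod7 b) `mod` 7

Until one such map (and, further, a physical implementation) is fixed, the question whether a program of the language is correct relative to a specification has no determinate answer.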
In Logics for Artificial Intelligence, Raymond Turner leads us on a whirlwind tour of nonstandard logics and their general applications to AI and computer science.
We may wonder about the status of logical accounts of the meaning of language. When does a particular proposal count as a theory? How do we judge a theory to be correct? What criteria can we use to decide whether one theory is "better" than another? Implicitly, many accounts attribute a foundational status to set theory, and set-theoretic characterisations of possible worlds in particular. The goal of a semantic theory is then to find a translation of the phenomena of interest into a set-theoretic model. Such theories may be deemed to have "explanatory" or "predictive" power if a mapping can be found into expressions of set theory that have the appropriate behaviour by virtue of the rules of set theory (for example Montague 1973; Montague 1974). This can be contrasted with an approach in which we can help ourselves to "new" primitives and ontological categories, and devise logical rules and axioms that capture the appropriate inferential behaviour (as in Turner 1992). In general, this alternative approach can be criticised as being mere "descriptivism", lacking predictive or explanatory power. Here we will seek to defend the axiomatic approach. Any formal account must assume some normative interpretation, but there is a sense in which such theories can provide a more honest characterisation (cf. Dummett 199). In contrast, the set-theoretic approach tends to conflate distinct ontological notions. Mapping a pattern of semantic behaviour into some pre-existing set-theoretic behaviour may lead to certain aspects of that behaviour being overlooked or ignored (Chierchia & Turner 1988; Bealer 1982). Arguments about the explanatory and predictive power of set-theoretic interpretations can also be questioned (see Benacerraf 1965, for example). We aim to provide alternative notions for evaluating the quality of a formalisation, and the role of formal theory. Ultimately, claims about the methodological and conceptual inadequacies of axiomatic accounts compared to set-theoretic reductions must rely on criteria and assumptions that lie outside the domain of formal semantics as such.
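For a concrete, entirely schematic contrast (my example, not the paper's): a set-theoretic reduction interprets propositions as sets of possible worlds, so that conjunction becomes intersection, whereas an axiomatic account introduces propositions as a new primitive sort together with a truth predicate $T$ and lays down their inferential behaviour directly:

    \[ \llbracket p \wedge q \rrbracket \;=\; \llbracket p \rrbracket \cap \llbracket q \rrbracket
       \qquad \text{(reduction: propositions as sets of worlds)} \]
    \[ \forall p\, \forall q\, \bigl( T(p \wedge q) \leftrightarrow T(p) \wedge T(q) \bigr)
       \qquad \text{(axiomatic: $T$ a primitive truth predicate)} \]

On the first view the equation is a theorem about sets; on the second the biconditional is an axiom answerable only to the inferential behaviour it is meant to capture, which is the sense in which the axiomatic theory can offer the more honest characterisation.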
In their reviews of Turner, Rapaport, Stephanou, Angius, Primiero, and Bringsjord cover a broad range of topics in the philosophy of computer science. They either challenge the positions outlined in Turner or offer a more refined analysis. This article is a response to their challenges.
The core entities of computer science include formal languages, specifications, models, programs, implementations, semantic theories, type inference systems, abstract and physical machines. While there are conceptual questions concerning their nature, and in particular ontological ones, our main focus here will be on the relationships between them. These relationships have an extensional aspect that articulates the propositional connection between the two entities, and an intentional one that fixes the direction of governance. An analysis of these two aspects will drive our investigation; an investigation that will touch upon some of the central concerns of the philosophy of computer science.
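A hypothetical illustration (the program and the property are mine) of how the two aspects come apart:

    -- Extensional aspect: a proposition relating two entities, here a
    -- program and the property it is meant to satisfy; the proposition
    -- is simply true or false of the pair.
    square :: Int -> Int
    square x = x * x

    meetsSpec :: Int -> Bool
    meetsSpec x = square x >= 0

    -- Intentional aspect: the proposition is silent about governance.
    -- Were meetsSpec to fail, the direction of governance determines
    -- whether we repair the program (the specification is normative)
    -- or revise the specification (the artefact is); nothing in the
    -- formula itself settles this.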
By the term nominalization I mean any process which transforms a predicate or predicate phrase into a noun or noun phrase, e.g. feminine is transformed into femininity. I call these derivative nouns abstract singular terms. Our aim is to provide a model-theoretic interpretation for a formal language which admits the occurrence of such abstract singular terms.
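In the style of property-theoretic treatments such as Chierchia & Turner (1988) (the notation here is mine), one device is a nominalization operator ${}^{\cap}$ taking a predicate $F$ to an abstract singular term ${}^{\cap}F$, its individual correlate, together with a primitive predication relation $\pi$ that recovers the predicate's applicative behaviour:

    \[ \pi({}^{\cap}F, x) \leftrightarrow F(x), \qquad {}^{\cup}({}^{\cap}F) = F \]

Here ${}^{\cap}\textit{feminine}$ denotes femininity, ${}^{\cup}$ undoes the nominalization, and the model-theoretic task is to interpret a language containing both $F$ and ${}^{\cap}F$ so that these principles hold, suitably restricted to avoid Russell-style paradox.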