Taken at face value, a programming language is defined by a formal grammar. But, clearly, there is more to it. By themselves, the naked strings of the language do not determine when a program is correct relative to some specification. For this, the constructs of the language must be given some semantic content. Moreover, to be employed to generate physical computations, a programming language must have a physical implementation. How are we to conceptualize this complex package? Ontologically, what kind of thing is it? In this paper, we shall argue that an appropriate conceptualization is furnished by the notion of a technical artifact.
An emerging standard for polymorphically typed, lazy, purely functional programming is Haskell, a language named after Haskell Curry. Haskell is based on (polymorphically typed) lambda calculus, which makes it an excellent tool for computational semantics.
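To make that claim concrete, here is a minimal Haskell sketch of typed-logic semantics in the Montague style; the toy domain and predicate names are our own illustration, not anything from the abstract above.

```haskell
-- A minimal sketch of model-theoretic semantics in Haskell.
-- The domain and predicates are invented for illustration.
module MiniSemantics where

-- A toy domain of entities.
data Entity = Alice | Bob | Cyrus deriving (Eq, Show, Enum, Bounded)

domain :: [Entity]
domain = [minBound .. maxBound]

-- One-place predicates are functions from entities to truth values.
sleeps, laughs :: Entity -> Bool
sleeps x = x == Alice
laughs x = x /= Cyrus

-- Generalized quantifiers come out as higher-order functions,
-- exactly as in typed lambda calculus.
everyone, someone :: (Entity -> Bool) -> Bool
everyone p = all p domain
someone  p = any p domain

-- "Someone sleeps" is True; "everyone laughs" is False (Cyrus does not).
main :: IO ()
main = print (someone sleeps, everyone laughs)
```

Because Haskell's function types mirror the types of the lambda calculus, the semantic clauses of a typed-logic fragment can be transcribed almost verbatim.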
In addition, this book contains tools that, in principle, can search a set of algorithms to determine whether a problem is solvable or, more specifically, whether it can be solved by an algorithm whose computations are efficient.
Recently, there have been some attempts towards developing programming languages based on situation theory. These languages employ situation-theoretic constructs with varying degrees of divergence from the ontology of the theory. In this paper, we review three of these programming languages.
Mathematical models are an important tool in the development of software technology, including programming languages and algorithms. During the last few years, a new class of such models has been developed based on the notion of a mathematical game that is especially well-suited to address the interactions between the components of a system. This paper gives an introduction to these game-semantical models of programming languages, concentrating on motivating the basic intuitions and putting them into context.
We build on an existing term-sequent logic for the λ-calculus. We formulate a general sequent system that fully integrates αβη-reductions between untyped λ-terms into first-order logic. We prove a cut-elimination result and then offer an application of cut-elimination by giving a notion of uniform proof for λ-terms. We suggest how this allows us to view the calculus of untyped αβ-reductions as a logic programming language (as well as a functional programming language, as it is traditionally seen).
Almost forty years ago Richard Montague proposed to analyse natural language with the same tools as formal languages. In particular, he gave formal semantic analyses of several interesting fragments of English in terms of typed logic. This led to the development of Montague grammar as a particular style of formal analysis of natural language.
The distinction between the modeling of information and the modeling of data in the creation of automated systems has historically been important because the development tools available to programmers have been wedded to machine-oriented data types and processes. However, advances in software engineering, particularly the move toward data abstraction in software design, allow activities reasonably described as information modeling to be performed in the software creation process. An examination of the evolution of programming languages and the development of general programming paradigms, including object-oriented design and implementation, suggests that while data modeling will necessarily continue to be a programmer's concern, more and more of the programming process itself is coming to be characterized by information modeling activities.
Dynamic epistemic logic is the logic of the effects of epistemic actions like making public announcements, passing private messages, revealing secrets, telling lies. This paper takes its starting point from the version of dynamic epistemic logic of , and demonstrates a tool that can be used for showing what goes on during a series of epistemic updates: the dynamic epistemic modelling tool DEMO [7, 9]. DEMO allows modelling epistemic updates, graphical display of update results, graphical display of action models, formula evaluation in epistemic models, and translation of dynamic epistemic formulas to PDL formulas. DEMO is written in Haskell. This paper intends to demonstrate its usefulness for visualizing the model transformations that take place during epistemic updating.
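The abstract describes what DEMO does rather than how; as a hand-rolled illustration of the underlying operation (a public announcement as restriction of a Kripke model), here is a small Haskell sketch. The types and names are our own assumptions, not DEMO's actual API.

```haskell
-- A sketch of epistemic updating: public announcement as model restriction.
-- This is illustrative only and does not reproduce DEMO's interface.
module PublicAnnouncement where

type World = Int
type Agent = String

-- A Kripke model: worlds, accessibility per agent, and a valuation.
data Model = Model
  { worlds :: [World]
  , rel    :: Agent -> [(World, World)]
  , val    :: World -> [String]   -- atomic propositions true at a world
  }

-- Publicly announcing a property keeps only the worlds where it holds
-- and restricts each accessibility relation to the surviving worlds.
announce :: (World -> Bool) -> Model -> Model
announce p m = Model
  { worlds = ws'
  , rel    = \a -> [ (u, v) | (u, v) <- rel m a, u `elem` ws', v `elem` ws' ]
  , val    = val m
  }
  where ws' = filter p (worlds m)

-- An agent knows phi at w iff phi holds at every world accessible from w.
knows :: Model -> Agent -> World -> (World -> Bool) -> Bool
knows m a w phi = and [ phi v | (u, v) <- rel m a, u == w ]
```

Iterating `announce` then models a series of epistemic updates, which is the kind of transformation DEMO computes and displays.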
This book describes the mathematical aspects of the semantics of programming languages. The main goals are to provide formal tools to assess the meaning of programming constructs in both a language-independent and a machine-independent way, and to prove properties about programs, such as whether they terminate, or whether their result is a solution of the problem they are supposed to solve. In order to achieve this, the authors first present, in an elementary and unified way, the theory of certain topological spaces that have proved of use in the modelling of various families of typed lambda calculi considered as core programming languages and as meta-languages for denotational semantics. This theory is now known as Domain Theory, and was founded as a subject by Scott and Plotkin. One of the main concerns is to establish links between mathematical structures and more syntactic approaches to semantics, often referred to as operational semantics, which is also described. This dual approach has the double advantage of motivating computer scientists to do some mathematics and of interesting mathematicians in unfamiliar application areas from computer science.
Epistemic logic is the logic of knowledge, and dynamic epistemic logic is the logic of effects of communicative actions on the knowledge states of a set of agents. Typical communicative actions are making public announcements, passing private messages, revealing secrets, telling lies. This paper takes its starting point from the version of dynamic epistemic logic of , and demonstrates a tool that can be used for showing what goes on during a series of epistemic updates: the dynamic epistemic modelling tool DEMO. DEMO allows modelling epistemic updates, graphical display of update results, graphical display of action models, formula evaluation in epistemic models, and translation of dynamic epistemic formulas to PDL formulas. DEMO is written in Haskell. This paper intends to demonstrate its use for calculating and visualizing the model transformations that take place during epistemic updating.
Two areas of importance for agents and multiagent systems are investigated: design of agent programming languages, and design of agent communication languages. The paper contributes to the above-mentioned areas by demonstrating improved or novel applications for deontic logic and normative reasoning. Examples are taken from computer-supported cooperative work and electronic commerce.
All sciences have epistemic assumptions, a language for expressing their theories or models, and symbols that reference observables that can be measured. In most sciences the language in which their models are expressed is not the focus of their attention, although the choice of language is often crucial for the model. By contrast, biosemiotics, by definition, cannot escape focusing on the symbol–matter relationship. Symbol systems first controlled material construction at the origin of life. At this molecular level it is only in the context of open-ended evolvability that symbol–matter systems and their functions can be objectively defined. Symbols are energy-degenerate structures not determined by laws that act locally as special boundary conditions or constraints on law-based energy-dependent matter in living systems. While this partial description holds for all symbol systems, cultural languages are much too complex to be adequately described only at the molecular level. Genetic language and cultural languages have common basic requirements, but there are many significant differences in their structures and functions.
We compare Fresco’s analysis of the Turing machine-based notion of computation with that of others, in particular with functional programming and with the reversible computing paradigm of Toffoli and others. We conclude that, although much useful philosophical work can be done by the sort of analysis that Fresco proposes, there are, nevertheless, always likely to be a number of individually viable but different accounts of computation.
A free logic is one in which a singular term can fail to refer to an existent object, for example, `Vulcan' or `5/0'. This essay demonstrates the fruitfulness of a version of this non-classical logic of terms (negative free logic) by showing (1) how it can be used not only to repair a looming inconsistency in Quine's theory of predication, the most influential semantical theory in contemporary philosophical logic, but also (2) how Beeson, Farmer and Feferman, among others, use it to provide a natural foundation for partial functions in programming languages. Vis-à-vis (2), the question is raised whether the Beeson-Farmer-Feferman approach is adequate to the treatment of partial functions in all programming languages. Gumb and the author say no, and suggest a way of handling the refractory cases by means of positive free logic. Finally, Antonelli's solution of a problem associated with the Gumb-Lambert proposal is mentioned.
How to write a program in Haskell, and how to use the Haskell testing tools... QuickCheck is a tool written in the functional programming language Haskell that allows testing of specifications by means of randomly generated tests. QuickCheck is part of the standard Haskell library. Re-implementations of QuickCheck exist for many languages, including Ruby and Scheme. SmallCheck is a similar tool, different from QuickCheck in that it tests properties for all the finitely many values of a datatype up to some given depth, with progressive increase of depth. Haskell is a research language: many of the testing tools that were first developed for Haskell later find their way to other languages. These slides discuss QuickCheck (two versions), SmallCheck, and some work in progress. We end with some examples of Alloy specifications.
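QuickCheck's basic interface is stable across versions, so a minimal session of the kind the slides discuss can be shown directly; the two properties below are our own examples.

```haskell
import Test.QuickCheck

-- A property that holds: reversing a list twice is the identity.
prop_revRev :: [Int] -> Bool
prop_revRev xs = reverse (reverse xs) == xs

-- A property that fails, so QuickCheck reports a shrunk counterexample.
prop_revId :: [Int] -> Bool
prop_revId xs = reverse xs == xs

main :: IO ()
main = do
  quickCheck prop_revRev   -- prints "+++ OK, passed 100 tests."
  quickCheck prop_revId    -- fails with a small counterexample such as [0,1]
```

SmallCheck is used in the same style, except that its driver enumerates all values up to a depth bound instead of sampling randomly.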
This paper provides an explication and defense of a view that many philosophers and biologists have accepted though few have understood, the idea that functional language can play an important role in biological discovery. I defend four theses in support of this view: (1) functional statements can serve as background assumptions that produce research problems; (2) functional questions can be important parts of research problems; (3) functional concepts can provide a framework for developing general theories; (4) functional statements can serve as heuristics for generating hypotheses. I develop and defend these four claims by describing a taxonomy of functional discourse, providing an account of scientific discovery, and by applying this framework to some cases of successful research in biology.
Cerebral language lateralization can be assessed in several ways. In healthy subjects, functional MRI (fMRI) during performance of a language task has evolved to be the most frequently applied method. Functional Transcranial Doppler (fTCD) may provide a valid alternative, but has been used rarely. Both techniques have their own strengths and weaknesses and as a result may be applied in different fields of research. Until now, only one relatively small study (n = 13) investigated the correlation between lateralization indices measured by fTCD and fMRI and showed a remarkably high correlation. To further evaluate the correlation between lateralization indices measured with fTCD and fMRI, we compared lateralization indices of twenty-two healthy subjects (twelve left- and ten right-handed) using the same word generation paradigm for the fTCD as for the fMRI experiment. Lateralization indices measured with fTCD were highly but imperfectly correlated with lateralization indices measured with fMRI (Spearman’s rho = 0.75, p < 0.001). The imperfection of the correlation can partially be explained by methodological restrictions of fMRI as well as fTCD. Our results suggest that fTCD can be a valid alternative to fMRI for measuring lateralization, particularly when costs or mobility are important factors in the study design.
1. Logic programming did not seize the attention of most programmers until the Japanese announced that they had chosen Prolog for their ambitious Fifth Generation Computer Systems project. While that project appears now to be hampered by bureaucratic difficulties, the interest it aroused in Prolog lives on. Part of the attraction of Prolog stems from the fact that the beginner will very quickly be able to write toy programs, even spectacular ones. Difficulties in creating larger programs, however, seem to bring Prolog back to the level of other programming languages. Such difficulties arise from numerous defects of Prolog, some of which are purely logical in nature. Among the latter at least two should be mentioned: (a) the peculiar meaning of negation; (b) the fact that reduction to clausal form is not part of the language. As to (a), strictly speaking Prolog has no negation. Its notion of negation-as-failure, by which ¬φ is inferred from failure to infer φ, is a tricky one. For instance, suppose that the goal likes(john, X) succeeds with X instantiated to mary. Then not(likes(john, X)) fails, so X becomes uninstantiated and hence has no value. However, not(not(likes(john, X))) succeeds with X instantiated to mary. This makes the meaning of negation almost incomprehensible. As to (b), for efficiency Prolog uses the programmer, as it were, as a preprocessor for reduction to clausal form: with the gain in efficiency that one can very well imagine. Of course reduction to clausal form can be implemented in Prolog and built up within every Prolog program together with a suitable user interface, but this is very much like designing a new programming language.
The construction of complex simulation models and the application of new computer hardware to ecological problems have resulted in the need for many ecologists to rely on computer programmers to develop their modelling software. However, this can lead to a lack of flexibility and understanding in model implementation, and to resource problems for researchers. This paper presents a new programming language, Viola, based on a simple organisational concept, which can be used by most researchers to develop complex simulations much more easily than could be achieved with standard programming languages such as C++. The language is object-oriented and implemented through a visual interface. It is specifically designed to cope with complicated individual-based behavioural simulations and comes with embedded concurrency-handling abilities.
One of the most important contributions of A. Church to logic is his invention of the lambda calculus. We present the genesis of this theory and its two major areas of application: the representation of computations and the resulting functional programming languages on the one hand, and the representation of reasoning and the resulting systems of computer mathematics on the other hand.
We provide a full characterization of computational error states for information systems. The class of errors considered is general enough to include human rational processes, logical reasoning, scientific progress and data processing in some functional programming languages. The aim is to reach a full taxonomy of error states by analysing the recovery and processing of data. We conclude by presenting machine-readable checking and resolve algorithms.
This paper starts from the simply typed λ-calculus, which uses the intuitionistic propositional calculus with → as its only connective. It is very important, because the well-known Curry-Howard correspondence between proofs and programs was originally discovered with it, and because it enjoys the normalization property: every typed term is strongly normalizable. It was extended to second-order intuitionistic logic, in 1970, by J.-Y. Girard, under the name of system F, still with the normalization property. More recently, in 1990, the Curry-Howard correspondence was extended to classical logic, following Felleisen and Griffin, who discovered that the law of Peirce corresponds to control instructions in functional programming languages. It is interesting to notice that, as early as 1972, Clint and Hoare had made an analogous remark for the law of excluded middle and controlled jump instructions in imperative languages. There are now many type systems which are based on classical logic; among the best known are the system LC of J.-Y. Girard and the λμ-calculus of M. Parigot. We shall use below a system closely related to the latter, called the λc-calculus [8, 9]. Both systems use classical second-order logic and have the normalization property. In the sequel, we shall extend the λc-calculus to Zermelo-Fraenkel set theory. The main problem is due to the axiom of extensionality. To overcome this difficulty, we first give the axioms of ZF in a suitable (equivalent) form, which we call ZFε.
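A small Haskell illustration of the correspondence just described (a sketch assuming the standard mtl library for the continuation monad; the function names are ours):

```haskell
import Control.Monad.Cont (Cont, callCC)

-- Under Curry-Howard, the type A -> (A -> B) -> B reads as modus ponens,
-- and the term below is its proof.
modusPonens :: a -> (a -> b) -> b
modusPonens x f = f x

-- The law of Peirce, ((A -> B) -> A) -> A, has no pure Haskell inhabitant:
-- it is classically but not intuitionistically valid. In the continuation
-- monad it is inhabited by callCC, matching the Felleisen-Griffin
-- observation that classical laws correspond to control instructions.
peirce :: ((a -> Cont r b) -> Cont r a) -> Cont r a
peirce = callCC
```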
In this tutorial, the meaning of natural language is analysed along the lines proposed by Gottlob Frege and Richard Montague. In building meaning representations, we assume that the meaning of a complex expression derives from the meanings of its components. Typed logic is a convenient tool to make this process of composition explicit. Typed logic allows for the building of semantic representations for formal languages and fragments of natural language in a compositional way. The tutorial ends with the discussion of an example fragment, implemented in the functional programming language Haskell (Haskell Team; Jones).
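As a minimal illustration of such compositional building of semantic representations, here is a toy Haskell fragment; it is far smaller than the tutorial's, and the datatypes and names are our own.

```haskell
-- Frege's principle in miniature: the meaning of "Alice laughs" is obtained
-- by applying the verb-phrase meaning to the noun-phrase meaning.
data Term    = Alice' | Bob'       deriving Show
data Formula = Pred String Term    deriving Show

type NP = Term                 -- noun phrases denote terms
type VP = Term -> Formula      -- verb phrases map terms to formulas

alice :: NP
alice = Alice'

laughs :: VP
laughs = Pred "laugh"

-- Composition is just function application, made explicit by the types.
sentence :: Formula
sentence = laughs alice        -- Pred "laugh" Alice'
```

The typed-logic flavour is visible in the type synonyms: each syntactic category gets a semantic type, and combining constituents is application at those types.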
Prior to the twentieth century, theories of knowledge were inherently perceptual. Since then, developments in logic, statistics, and programming languages have inspired amodal theories that rest on principles fundamentally different from those underlying perception. In addition, perceptual approaches have become widely viewed as untenable because they are assumed to implement recording systems, not conceptual systems. A perceptual theory of knowledge is developed here in the context of current cognitive science and neuroscience. During perceptual experience, association areas in the brain capture bottom-up patterns of activation in sensory-motor areas. Later, in a top-down manner, association areas partially reactivate sensory-motor areas to implement perceptual symbols. The storage and reactivation of perceptual symbols operates at the level of perceptual components – not at the level of holistic perceptual experiences. Through the use of selective attention, schematic representations of perceptual components are extracted from experience and stored in memory (e.g., individual memories of green, purr, hot). As memories of the same component become organized around a common frame, they implement a simulator that produces limitless simulations of the component (e.g., simulations of purr). Not only do such simulators develop for aspects of sensory experience, they also develop for aspects of proprioception (e.g., lift, run) and introspection (e.g., compare, memory, happy, hungry). Once established, these simulators implement a basic conceptual system that represents types, supports categorization, and produces categorical inferences. These simulators further support productivity, propositions, and abstract concepts, thereby implementing a fully functional conceptual system. Productivity results from integrating simulators combinatorially and recursively to produce complex simulations. Propositions result from binding simulators to perceived individuals to represent type-token relations. Abstract concepts are grounded in complex simulations of combined physical and introspective events. Thus, a perceptual theory of knowledge can implement a fully functional conceptual system while avoiding problems associated with amodal symbol systems. Implications for cognition, neuroscience, evolution, development, and artificial intelligence are explored.
When John von Neumann turned his interest to computers, he was one of the leading mathematicians of his time. In the 1940s, he helped design two of the first stored-program digital electronic computers. He authored reports explaining the functional organization of modern computers for the first time, thereby influencing their construction worldwide (von Neumann, 1945; Burks et al., 1946). In the first of these reports, von Neumann described the computer as analogous to a brain, with an input “organ” (analogous to sensory neurons), a memory, an arithmetical and a logical “organ” (analogous to associative neurons), and an output “organ” (analogous to motor neurons). His experience with computers convinced him that brains and computers, both having to do with the processing of information, should be studied by a new discipline–automata theory. In fact, according to von Neumann, automata theory would cover not only computers and brains, but also any biological or artificial systems that dealt with information and control, including robots and genes. Von Neumann never formulated a full-blown mathematical theory of automata, but he wrote several important exploratory papers (von Neumann, 1951, 1956, 1966). Meanwhile, besides designing hardware, he developed some of the first programs, programming languages, programming techniques, and numerical methods for solving mathematical problems using computers. (Much of his work on computing is reprinted in Aspray and Burks, 1987.) Shortly before his death in 1956, he wrote an informal synthesis of his views about brains. Though von Neumann left his manuscript sketchy and unfinished, Yale University Press published it as The Computer and the Brain in 1958. The 2000 reprint of this small but informative book is an opportunity to learn, or be reminded of, von Neumann’s thoughts on the computational organization of the mind-brain. Von Neumann began by explaining computers, which for him were essentially number crunchers: to compute was “to operate on ...
This unique book presents a comprehensive and rigorous treatment of the theory of computability which is introductory yet self-contained. It takes a novel approach by looking at the subject using computation models rather than a limitation orientation, and is the first book of its kind to include software. Accompanying software simulations of almost all computational models are available for use in conjunction with the text, and numerous examples are provided on disk in a user-friendly format. Its applications to computer science itself include interesting links to programming language theory, compiler design theory, and algorithm design. The software, numerous examples, and solutions make this book ideal for self-study by computer scientists and mathematicians alike.
Criminal intelligence data poses problems for conventional database technology. It has little structure or homogeneity, and queries may involve looking for unknown associations between entities; such open-ended queries cannot be made in current systems. Finally, the data must be presented in an intuitively simple fashion for both investigative and evidential purposes. We discuss a database system which uses a labelled graph as its data model. This approach obviates the need for schema design, allows queries which look for associations between entities to be implemented, and provides the basis for a natural visual representation of the data.
Web legal information retrieval systems need the capability to reason with the knowledge modeled by legal ontologies. Using this knowledge it is possible to represent and to make inferences about the semantic content of legal documents. In this paper a methodology for applying NLP techniques to automatically create a legal ontology is proposed. The ontology is defined in the OWL semantic web language and it is used in a logic programming framework, EVOLP+ISCO, to allow users to query the semantic content of the documents. ISCO allows an easy and efficient integration of declarative, object-oriented and constraint-based programming techniques with the capability to create connections with external databases. EVOLP is a dynamic logic programming framework allowing the definition of rules for actions and events. An application of the proposed methodology to the legal web information retrieval system of the Portuguese Attorney General’s Office is described.
Elephant 2000 is a proposed programming language good for writing and verifying programs that interact with people (e.g. transaction processing) or interact with programs belonging to other organizations (e.g. electronic data interchange). 1. Communication inputs and outputs are in an I-O language whose sentences are meaningful speech acts identified in the language as questions, answers, offers, acceptances, declinations, requests, permissions and promises. 2. The correctness of programs is partly defined in terms of proper performance of the speech acts. Answers should be truthful and responsive, and promises should be kept. Sentences of logic expressing these forms of correctness can be generated automatically from the form of the program. 3. Elephant source programs may not need data structures, because they can refer directly to the past. Thus a program can say that an airline passenger has a reservation if he has made one and hasn't cancelled it. 4. Elephant programs themselves can be represented as sentences of logic. Their extensional properties follow from this representation without an intervening theory of programming or anything like Hoare axioms. 5. Elephant programs that interact non-trivially with the outside world can have both input-output specifications, relating the program's inputs and outputs, and accomplishment specifications concerning what the program accomplishes in the world. These concepts are respectively generalizations of the philosophers' illocutionary and perlocutionary speech acts. 6. Programs that engage in commercial transactions assume obligations on behalf of their owners in exchange for obligations assumed by other entities. It may be part of the specification of an Elephant 2000 program that these obligations are exchanged as intended, and this too can be expressed by a logical sentence. 7. Human speech acts involve intelligence. Elephant 2000 is on the borderline of AI, but the article emphasizes the Elephant usages that do not require AI.
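Elephant 2000 itself is only proposed, so as an illustration of point 3 here is a hand-rolled Haskell sketch of the reservation example, answering the query by inspecting a recorded event history rather than maintaining a data structure. All names are our own assumptions, not Elephant syntax.

```haskell
-- Referring directly to the past: a passenger has a reservation if the
-- most recent relevant event in the history is a Reserve, not a Cancel.
data Event = Reserve String | Cancel String deriving (Eq, Show)

hasReservation :: String -> [Event] -> Bool
hasReservation p history = go (reverse history)   -- newest event first
  where
    go (Reserve q : rest)
      | q == p    = True
      | otherwise = go rest
    go (Cancel q : rest)
      | q == p    = False
      | otherwise = go rest
    go [] = False

-- hasReservation "john" [Reserve "john"]                == True
-- hasReservation "john" [Reserve "john", Cancel "john"] == False
```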
The interaction between brain and language has been investigated by a vast amount of research and different approaches, which however do not offer a comprehensive and unified theoretical framework to analyze how brain functioning performs the mental processes we use in producing language and in understanding speech. This Special Issue addresses the need to develop such a general theoretical framework, by fostering an interaction among the various scientific disciplines and methodologies, which centres on investigating the functional architecture of brain, mind and language, and is articulated along the following main dimensions of research: (a) Language as a regulatory contour of brain and mental processes; (b) Language as a unique human phenomenon; (c) Language as a governor of human behaviour and brain operations; (d) Language as an organizational factor of ontogenesis of mentation and behaviour.
Language can impact emotion, even when it makes no reference to emotion states. For example, reading sentences with positive meanings (“The water park is refreshing on the hot summer day”) induces patterns of facial feedback congruent with the sentence emotionality (smiling), whereas sentences with negative meanings induce a frown. Moreover, blocking facial afference with botox selectively slows comprehension of emotional sentences. Therefore, theories of cognition should account for emotion-language interactions above the level of explicit emotion words, and the role of peripheral feedback in comprehension. For this special issue exploring frontiers in the role of the body and environment in cognition, we propose a theory in which facial feedback provides a context-sensitive constraint on the simulation of actions described in language. Paralleling the role of emotions in real-world behavior, our account proposes that 1) facial expressions accompany sudden shifts in well-being as described in language; 2) facial expressions modulate emotion states during reading; and 3) emotion states prepare the reader for an effective simulation of the ensuing language content. To inform the theory and guide future research, we outline a framework based on internal models for motor control. To support the theory, we assemble evidence from diverse areas of research. Taking a functional view of emotion, we tie the theory to behavioral and neural evidence for a role of facial feedback in cognition. Our theoretical framework provides a detailed account that can guide future research on the role of emotional feedback in language processing, and on interactions of language and emotion. It also highlights the bodily periphery as relevant to theories of embodied cognition.
Anatomo-functional studies in humans point out that handedness and language-related functional laterality are not correlated – except during language production; and that the convergence of language and hand control is located in the precentral gyrus, whereas executive functions required by movement imitation and phonological and semantic processing converge onto Broca's area. Multiple domains are likely to be actors in language evolution.
Several studies have suggested a bilingual advantage in executive functions, presumably due to bilinguals’ massive practice with language switching that requires executive resources, but the results are still somewhat controversial. Previous studies are also plagued by the inherent limitations of a natural groups design where the participant groups are bound to differ in many ways in addition to the variable used to classify them. In an attempt to introduce a complementary analysis approach, we employed multiple regression to study whether the performance of 30-75-year-old Finnish-Swedish bilinguals (n = 38) on tasks measuring different executive functions (inhibition, updating, and set shifting) could be predicted by the frequency of language switches in everyday life (as measured by a language switching questionnaire), by L2 age of acquisition, or by the self-estimated degree of use of both languages in everyday life. The most consistent effects were found for the set shifting task, where a higher rate of everyday language switches was related to a smaller mixing cost in errors. Mixing cost is thought to reflect top-down management of competing task sets, thus resembling the bilingual situation where a decision about which language to use has to be made in each conversation. These findings provide additional support to the idea that some executive functions in bilinguals are affected by lifelong experience in language switching and, perhaps even more importantly, suggest a complementary approach to the study of this issue.
This paper adds temporal logic to public announcement logic (PAL) and dynamic epistemic logic (DEL). By adding a previous-time operator to PAL, we express in the language statements concerning the muddy children puzzle and sum and product. We also express a true statement that an agent’s beliefs about another agent’s knowledge flipped twice, and use a sound proof system to prove this statement. Adding a next-time operator to PAL, we provide formulas that express that belief revision does not take place in PAL. We also discuss relationships between announcements and the new knowledge agents thus acquire; such relationships are related to learning and to Fitch’s paradox. We also show how inverse programs and hybrid logic each can be used to help determine whether or not an arbitrary structure represents the play of a game. We then add a past-time operator to DEL, and discuss the importance of adding yet another component to the language in order to prove completeness.
The variety of semantical approaches that have been invented for logic programs is quite broad, drawing on classical and many-valued logic, lattice theory, game theory, and topology. One source of this richness is the inherent non-monotonicity of its negation, something that does not have close parallels in the machinery of other programming paradigms. Nonetheless, much of the work on logic programming semantics seems to exist side by side with similar work done for imperative and functional programming, with relatively minimal contact between communities. In this paper we summarize one variety of approaches to the semantics of logic programs: that based on fixpoint theory. We do not attempt to cover much beyond this single area, which is already remarkably fruitful. We hope readers will see the parallels with, and the divergences from, the better-known fixpoint treatments developed for other programming methodologies.
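To fix intuitions, here is a minimal Haskell sketch of the construction the paper surveys, for a propositional definite program: the least model is the least fixpoint of the immediate-consequence operator T_P, reached by iterating from the empty interpretation. The representation is our own simplification.

```haskell
import Data.List (nub, sort)

type Atom   = String
type Clause = (Atom, [Atom])   -- head :- body

-- One application of T_P: the heads of clauses whose bodies hold in I.
tp :: [Clause] -> [Atom] -> [Atom]
tp prog i = nub (sort [ h | (h, body) <- prog, all (`elem` i) body ])

-- Iterate T_P from the empty set up to the least fixpoint; this terminates
-- because there are finitely many atoms and T_P is monotone.
leastModel :: [Clause] -> [Atom]
leastModel prog = go []
  where go i = let i' = tp prog i in if i' == i then i else go i'

-- Example program: p.  q :- p.  r :- q, s.
example :: [Clause]
example = [("p", []), ("q", ["p"]), ("r", ["q", "s"])]

-- leastModel example == ["p","q"]  (r is not derivable, since s never holds)
```

The non-monotonicity mentioned above is exactly what breaks this simple picture once negation is added, which is why the fixpoint literature for logic programs is so much richer.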