A substantial amount of recent work in natural language generation has focused on the generation of “one-shot” referring expressions whose only aim is to identify a target referent. Dale and Reiter's Incremental Algorithm (IA) is often thought to be the best algorithm for maximizing the similarity to referring expressions produced by people. We test this hypothesis by eliciting referring expressions from human subjects and computing the similarity between the expressions elicited and the ones generated by algorithms. It turns out that the success of the IA depends substantially on the “preference order” (PO) employed by the IA, particularly in complex domains. While some POs cause the IA to produce referring expressions that are very similar to expressions produced by human subjects, others cause the IA to perform worse than its main competitors; moreover, it turns out to be difficult to predict the success of a PO on the basis of existing psycholinguistic findings or frequencies in corpora. We also examine the computational complexity of the algorithms in question and argue that there are no compelling reasons for preferring the IA over some of its main competitors on these grounds. We conclude that future research on the generation of referring expressions should explore alternatives to the IA, focusing on algorithms, inspired by the Greedy Algorithm, which do not work with a fixed PO.
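To make the procedure concrete, here is a minimal Python sketch of the Incremental Algorithm's core selection loop, assuming a toy domain of attribute-value dicts. The entities, attributes, and preference order are hypothetical, and the special handling of the head noun ("type") in Dale and Reiter's full algorithm is omitted.

def incremental_algorithm(target, distractors, preference_order):
    """Select attribute-value pairs that distinguish `target` from
    `distractors`, considering attributes in a fixed preference order."""
    description = {}
    remaining = list(distractors)
    for attr in preference_order:
        value = target.get(attr)
        if value is None:
            continue
        # Include the attribute only if it rules out at least one distractor.
        ruled_out = [d for d in remaining if d.get(attr) != value]
        if ruled_out:
            description[attr] = value
            remaining = [d for d in remaining if d.get(attr) == value]
        if not remaining:  # target uniquely identified
            break
    return description

# Hypothetical usage: refer to a black dog among other animals.
target = {"type": "dog", "color": "black", "size": "small"}
distractors = [{"type": "dog", "color": "white", "size": "small"},
               {"type": "cat", "color": "black", "size": "small"}]
print(incremental_algorithm(target, distractors, ["type", "color", "size"]))
# -> {'type': 'dog', 'color': 'black'}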
In a recent article published in this journal (van Deemter, Gatt, van der Sluis, & Power, 2012), the authors criticize the Incremental Algorithm (a well-known algorithm for the generation of referring expressions due to Dale & Reiter, 1995, also in this journal) because of its strong reliance on a pre-determined, domain-dependent Preference Order. The authors argue that there are potentially many different Preference Orders that could be considered, while often no evidence is available to determine which is a good one. In this brief note, however, we suggest (based on a learning curve experiment) that finding a Preference Order for a new domain may not be so difficult after all, as long as one has access to a handful of human-produced descriptions collected in a semantically transparent way. We argue that this is due to the fact that it is both more important and less difficult to get a good ordering of the head than of the tail of a Preference Order.
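A minimal sketch of the frequency-based idea, assuming each human-produced description is reduced to the set of attributes it uses (the example descriptions are hypothetical): ranking attributes by how often people use them tends to pin down the head of the order quickly, even from a handful of descriptions.

from collections import Counter

def learn_preference_order(descriptions):
    """Order attributes by how often they occur in human descriptions."""
    counts = Counter(attr for desc in descriptions for attr in desc)
    return [attr for attr, _ in counts.most_common()]

descriptions = [{"type", "color"}, {"type"}, {"type", "color", "size"},
                {"type", "size"}, {"type", "color"}]
print(learn_preference_order(descriptions))  # -> ['type', 'color', 'size']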
This paper shows that a relatively easy algorithm for computing the (unique) outcome of a sophisticated voting procedure called sequential voting by veto (SVV) applies to a more general situation than considered hitherto. According to this procedure, a sequence of n voters must select s out of m + s options (s > 0, m ⩾ n ⩾ 2). The ith voter, when his turn comes, vetoes k_i options (k_i ⩾ 1, ∑ k_i = m). The s remaining non-vetoed options are selected. Every voter is assumed to be fully informed of all other voters' total (linear) preference orderings among the competing options, as well as of the order in which the veto votes are cast. This algorithm was proposed by Mueller (1978) for the special case where s and the k_i are all equal to 1, and extended by Moulin (1983) to the somewhat more general case where the k_i are arbitrary but s is still 1. Some theoretical and practical issues of voting by veto are discussed.
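For concreteness, here is a minimal sketch of the veto procedure itself under sincere (non-strategic) voting; it is not Mueller and Moulin's algorithm for the sophisticated outcome, and the option names and preference profile are hypothetical.

def sincere_svv(options, preferences, vetoes):
    """Each voter i, in turn, vetoes their vetoes[i] least-preferred options
    still available; the s non-vetoed options are selected."""
    available = set(options)
    for prefs, k in zip(preferences, vetoes):
        # Veto the k worst still-available options by this voter's ranking.
        worst_first = [o for o in reversed(prefs) if o in available]
        for option in worst_first[:k]:
            available.remove(option)
    return available

# Hypothetical example: n = 2 voters select s = 1 of m + s = 3 options.
options = ["a", "b", "c"]
preferences = [["a", "b", "c"],   # voter 1: a > b > c
               ["c", "b", "a"]]   # voter 2: c > b > a
print(sincere_svv(options, preferences, vetoes=[1, 1]))  # -> {'b'}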
While there is broad consensus about the structural similarities between language and music, comparably less attention has been devoted to semantic correspondences between these two ubiquitous manifestations of human culture. We have investigated the relations between music and a narrow and bounded domain of semantics: the words and concepts referring to taste sensations. In a recent work, we found that taste words were consistently mapped to musical parameters. Bitter is associated with low-pitched and continuous music (legato), salty is characterized by silences between notes (staccato), sour is high-pitched, dissonant, and fast, and sweet is consonant, slow, and soft (Mesz 2011). Here we extended these ideas, in a synergistic dialog between music and science, investigating whether music can be algorithmically generated from taste-words. We developed and implemented an algorithm that exploits a large corpus of classic and popular songs. New musical pieces were produced by choosing fragments from the corpus and modifying them to minimize their distance to the region in musical space that characterizes each taste. In order to test the capability of the produced music to elicit significant associations with the different tastes, musical pieces were produced and judged by a group of non-musicians. Results showed that participants could identify the taste-word of the composition well above chance. We also discuss how our findings can be expressed in a performance bridging music and cognitive science.
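A minimal sketch of the fragment-selection step, assuming taste words map to target points in a toy three-parameter musical space (pitch, articulation, tempo). The parameter values and the two-fragment "corpus" are hypothetical stand-ins for the large corpus and richer feature space of the actual system.

import math

# (pitch: 0 low..1 high, articulation: 0 legato..1 staccato, tempo: 0 slow..1 fast)
TASTE_TARGETS = {
    "bitter": (0.1, 0.1, 0.4),  # low-pitched, continuous (legato)
    "salty":  (0.5, 0.9, 0.6),  # silences between notes (staccato)
    "sour":   (0.9, 0.6, 0.9),  # high-pitched and fast
    "sweet":  (0.5, 0.2, 0.2),  # slow, soft, smooth
}

def closest_fragment(taste, corpus):
    """Return the corpus fragment whose features minimize the Euclidean
    distance to the region associated with the given taste word."""
    target = TASTE_TARGETS[taste]
    return min(corpus, key=lambda frag: math.dist(frag["features"], target))

corpus = [{"name": "fragment-1", "features": (0.2, 0.15, 0.3)},
          {"name": "fragment-2", "features": (0.85, 0.7, 0.95)}]
print(closest_fragment("sour", corpus)["name"])  # -> fragment-2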
Driven by concrete applications, Algorithm Engineering complements theory by the benefits of experimentation and puts equal emphasis on all aspects arising during a cyclic solution process ranging from realistic modeling, design, analysis, ...
Allen's famous interval-relations constraint-propagation algorithm was intended for linear time. Its 13 primitive relations define all the possible mutual locations of two intervals on the time-axis. In this paper an application of the algorithm to non-linear time is suggested. First, a new primitive relation is added. It is called excludes, since an occurrence of one event in a certain course of events excludes an occurrence of the other event in this course. Next, new composition rules for relations between intervals are presented: some of the old rules are extended by the relation excludes, and entirely new ones are formulated for composing the relation excludes with the other relations. Four different composition tables are considered. The choice of a composition table depends on whether time is branching or not, and whether intervals can contain non-collinear subintervals or not.
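As a point of reference, here is a minimal sketch that classifies two intervals on a linear time-axis into one of Allen's 13 primitive relations; the new excludes relation and the composition tables for branching time are not modeled here.

def allen_relation(a, b):
    """Return Allen's primitive relation between intervals a = (a1, a2) and
    b = (b1, b2), each with start < end, on a linear time-axis."""
    (a1, a2), (b1, b2) = a, b
    if a2 < b1:  return "before"
    if b2 < a1:  return "after"
    if a2 == b1: return "meets"
    if b2 == a1: return "met-by"
    if a1 == b1 and a2 == b2: return "equal"
    if a1 == b1: return "starts" if a2 < b2 else "started-by"
    if a2 == b2: return "finishes" if a1 > b1 else "finished-by"
    if b1 < a1 and a2 < b2: return "during"
    if a1 < b1 and b2 < a2: return "contains"
    return "overlaps" if a1 < b1 else "overlapped-by"

print(allen_relation((0, 5), (3, 8)))  # -> overlaps
print(allen_relation((2, 4), (0, 9)))  # -> during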
This response discusses the experiment reported in Krahmer et al.'s Letter to the Editor of Cognitive Science. We observe that their results do not tell us whether the Incremental Algorithm is better or worse than its competitors, and we speculate about implications for reference in complex domains, and for learning from “normal” (i.e., non-semantically-balanced) corpora.
Argumentation theory underwent a significant development in the Fifties and Sixties: its revival is usually connected to Perelman's criticism of formal logic and the development of informal logic. Interestingly enough, it was during this period that Artificial Intelligence was developed, a field committed to the following thesis (from now on referred to as the AI-thesis): human reasoning can be emulated by machines. The paper suggests a reconstruction of the opposition between formal and informal logic as a move against a premise of an argument for the AI-thesis, and suggests making a distinction between a broad and a narrow notion of algorithm that might be used to reformulate the question as a foundational problem for argumentation theory.
The Fast Causal Inference (FCI) algorithm searches for features common to observationally equivalent sets of causal directed acyclic graphs. It is correct in the large sample limit with probability one even if there is a possibility of hidden variables and selection bias. In the worst case, the number of conditional independence tests performed by the algorithm grows exponentially with the number of variables in the data set. This affects both the speed of the algorithm and the accuracy of the algorithm on small samples, because tests of independence conditional on large numbers of variables have very low power. In this paper, I prove that the FCI algorithm can be interrupted at any stage and asked for output. The output from the interrupted algorithm is still correct with probability one in the large sample limit, although possibly less informative (in the sense that it answers “Can’t tell” for a larger number of questions) than if the FCI algorithm had been allowed to continue uninterrupted.
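A small illustration of the combinatorial blow-up: for a single pair of variables, the number of candidate conditioning sets drawn from the remaining variables is already 2^(n-2). This is bare combinatorics, not FCI's exact test count.

from math import comb

def conditioning_sets(n_vars):
    """Subsets of the n_vars - 2 other variables available for one CI test;
    the sum equals 2**(n_vars - 2)."""
    return sum(comb(n_vars - 2, k) for k in range(n_vars - 1))

for n in (5, 10, 20, 30):
    print(n, conditioning_sets(n))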
A probability distribution satisfies the Markov condition for a directed acyclic graph (DAG) with vertex set V if and only if for every W in V, W is independent of the set of all its non-descendants conditional on the set of its parents. One natural question that arises with respect to DAGs is when two DAGs are “statistically equivalent”. One interesting sense of “statistical equivalence” is “d-separation equivalence” (explained in more detail below). In the case of DAGs, d-separation equivalence also corresponds to a variety of other natural senses of statistical equivalence (such as representing the same set of distributions). Theorems characterizing d-separation equivalence for directed acyclic graphs, which can be used as the basis for polynomial time algorithms for checking d-separation equivalence, were provided by Verma and Pearl (1990) and Frydenberg (1990). The question we will examine is how to extend these results to cases where a DAG may have latent (unmeasured) variables or selection bias (i.e. some of the variables in the DAG have been conditioned on). D-separation equivalence is of interest in part because there are algorithms for constructing DAGs with latent variables and selection bias that are based on observed conditional independence relations. For this class of algorithms, it is impossible to determine which of two d-separation equivalent causal structures generated a given probability distribution, given only the set of conditional independence and dependence relations true of the observed distribution. We will describe a polynomial (in the number of vertices) time algorithm for determining when two DAGs which may have latent variables or selection bias are d-separation equivalent.
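A minimal sketch of the classical (no latents, no selection bias) criterion due to Verma and Pearl, which the paper generalizes: two DAGs are d-separation equivalent iff they share the same skeleton and the same v-structures (unshielded colliders). The graphs below are hypothetical.

def skeleton(dag):
    """Undirected adjacencies of a DAG given as {node: set(parents)}."""
    return {frozenset((p, c)) for c, ps in dag.items() for p in ps}

def v_structures(dag):
    """Triples (a, c, b) with a -> c <- b and a, b non-adjacent."""
    skel = skeleton(dag)
    return {(a, c, b)
            for c, ps in dag.items()
            for a in ps for b in ps
            if a < b and frozenset((a, b)) not in skel}

def d_separation_equivalent(g1, g2):
    return skeleton(g1) == skeleton(g2) and v_structures(g1) == v_structures(g2)

# x -> y -> z and x <- y <- z are equivalent; x -> y <- z is not.
chain1 = {"x": set(), "y": {"x"}, "z": {"y"}}
chain2 = {"z": set(), "y": {"z"}, "x": {"y"}}
collider = {"x": set(), "z": set(), "y": {"x", "z"}}
print(d_separation_equivalent(chain1, chain2))    # -> True
print(d_separation_equivalent(chain1, collider))  # -> False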
In a fundamental paper [Phys. Rev. Lett. 78, 325 (1997)] Grover showed how a quantum computer can find a single marked object in a database of size N by using only O(√N) queries of the oracle that identifies the object. His result was generalized to the case of finding one object in a subset of marked elements. We consider the following computational problem: a subset of marked elements is given whose number of elements is either M or K, M < K; our task is to determine which is the case. We show how to solve this problem with a high probability of success using only iterations of Grover's basic step (and no other algorithm). Let m be the required number of iterations; we prove that under certain restrictions on the sizes of M and K the estimation...
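A minimal sketch of the closed-form behaviour underlying this approach: after t Grover iterations, the probability of measuring a marked element is sin²((2t+1)·θ) with sin θ = √(M/N), so one can look for iteration counts at which the two hypotheses M and K come apart. The concrete N, M, K below are hypothetical.

import math

def success_probability(n_items, n_marked, iterations):
    """P(measure a marked element) after the given number of Grover steps."""
    theta = math.asin(math.sqrt(n_marked / n_items))
    return math.sin((2 * iterations + 1) * theta) ** 2

N, M, K = 1024, 4, 16
for t in range(0, 81, 20):
    pm = success_probability(N, M, t)
    pk = success_probability(N, K, t)
    print(f"t={t:3d}  P(hit | M={M})={pm:.3f}  P(hit | K={K})={pk:.3f}")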
We argue that some algorithms are value-laden, and that two or more persons who accept different value-judgments may have a rational reason to design such algorithms differently. We exemplify our claim by discussing a set of algorithms used in medical image analysis: in these algorithms it is often necessary to set certain thresholds for whether e.g. a cell should count as diseased or not, and the chosen threshold will partly depend on the software designer's preference between avoiding false positives and false negatives. This preference ultimately depends on a number of value-judgments. In the last section of the paper we discuss some general principles for dealing with ethical issues in algorithm-design.
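A minimal sketch of the value-judgment in question: moving a threshold over (hypothetical) cell scores trades false positives against false negatives, and no threshold minimizes both at once.

def confusion_counts(scores, labels, threshold):
    """labels: True = actually diseased. Predict diseased when score >= threshold."""
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and not y)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y)
    return fp, fn

scores = [0.2, 0.4, 0.5, 0.6, 0.7, 0.9]
labels = [False, False, True, False, True, True]
for threshold in (0.3, 0.5, 0.8):
    fp, fn = confusion_counts(scores, labels, threshold)
    print(f"threshold={threshold}: false positives={fp}, false negatives={fn}")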
We demonstrate a method for optimizing desired functionality in real complex chemical systems, using a genetic algorithm. The chemical systems studied here are mixtures of amphiphiles, which spontaneously exhibit a complex variety of self-assembled molecular aggregations, and the property optimized is turbidity. We also experimentally resolve the fitness landscape in some hyper-planes through the space of possible amphiphile formulations, in order to assess the practicality of our optimization method. Our method shows clear and significant progress after testing only 1% of the possible amphiphile formulations.
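A minimal sketch of a genetic algorithm over formulation vectors, assuming a black-box fitness function standing in for the measured turbidity of a real mixture; all concrete numbers (population size, mutation scale, the toy fitness) are hypothetical.

import random

def genetic_algorithm(fitness, n_components=5, pop_size=20, generations=30):
    # A formulation is a vector of component fractions summing to 1.
    def normalize(v):
        total = sum(v)
        return [x / total for x in v]
    population = [normalize([random.random() for _ in range(n_components)])
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]        # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_components)  # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(n_components)       # point mutation
            child[i] = max(1e-9, child[i] + random.gauss(0, 0.1))
            children.append(normalize(child))
        population = parents + children
    return max(population, key=fitness)

# Hypothetical stand-in for a turbidity measurement.
best = genetic_algorithm(lambda v: -sum((x - 0.2) ** 2 for x in v))
print([round(x, 3) for x in best])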
The issue of whether formal reasoning or a computing-intensive approach is the most efficient way to address scientific questions is the subject of considerable debate, and pertains not only to the nature of the phenomena and processes investigated by scientists, but also to the nature of the equation and algorithm objects they use. Although algorithms and equations both rely on a common background of mathematical language and logic, they nevertheless possess some critical differences. They do not refer to the same level of symbolization: equations are based on integrated concepts in a denotational manner, while algorithms specifically break down a complex problem into more elementary operations, in an operational manner. They may therefore be considered as suited to the representation of different phenomena. Specifically, algorithms are by nature sufficient to represent weak emergent phenomena, but not strong emergent patterns, while equations can do both. We propose a simplified classification of scientific issues for which both equation- and/or algorithm-based approaches can be envisaged, and discuss their respective pros and cons. We further discuss the complementary and sometimes conflicting uses of equations and algorithms in the context of an ecological theory of metapopulation dynamics. We finally propose both conceptual and practical guidelines for choosing between the alternative approaches.
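A small illustration of the contrast, using the classic Levins metapopulation model dp/dt = c·p·(1−p) − e·p: the equation states the equilibrium occupancy denotationally (p* = 1 − e/c), while the algorithm reaches it operationally through elementary update steps. The parameter values are hypothetical.

def equilibrium(c, e):
    return 1 - e / c                     # the equation: a closed-form answer

def simulate(c, e, p=0.01, dt=0.01, steps=200_000):
    for _ in range(steps):               # the algorithm: elementary updates
        p += (c * p * (1 - p) - e * p) * dt
    return p

c, e = 0.5, 0.1
print(equilibrium(c, e))   # -> 0.8
print(simulate(c, e))      # -> approximately 0.8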
We extend previous work by modeling evolution of communication using a spatialized genetic algorithm which recombines strategies purely locally. Here cellular automata are used as a spatialized environment in which individuals gain points by capturing drifting food items and are 'harmed' if they fail to hide from migrating predators. Our individuals are capable of making one of two arbitrary sounds, heard only locally by their immediate neighbors. They can respond to sounds from their neighbors by opening their mouths or by hiding. By opening their mouths in the presence of food they maximize gains; by hiding when a predator is present they minimize losses. We consider the result a 'natural' template for benefits from communication; unlike a range of other studies, it is here only the recipient of communicated information that immediately benefits.
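A heavily simplified sketch of the communication template: agents on a toroidal grid emit one of two sounds heard only by their immediate neighbours, and only the hearers benefit, by opening their mouths when food is announced or hiding when a predator is announced. The strategy encoding and payoffs are illustrative stand-ins, and the local genetic recombination of strategies is omitted.

import random

SIZE = 10

def random_agent():
    """Strategy: which sound to emit on food / predator, how to react to each."""
    return {"on_food": random.choice([1, 2]),
            "on_predator": random.choice([1, 2]),
            "hear1": random.choice(["open", "hide", "ignore"]),
            "hear2": random.choice(["open", "hide", "ignore"]),
            "score": 0.0}

grid = [[random_agent() for _ in range(SIZE)] for _ in range(SIZE)]

def neighbours(i, j):
    return [((i + di) % SIZE, (j + dj) % SIZE)
            for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]

def step():
    for event in ("food", "predator"):
        i, j = random.randrange(SIZE), random.randrange(SIZE)
        sound = grid[i][j]["on_food" if event == "food" else "on_predator"]
        for ni, nj in neighbours(i, j):      # sounds are heard only locally
            reaction = grid[ni][nj][f"hear{sound}"]
            if event == "food" and reaction == "open":
                grid[ni][nj]["score"] += 1.0  # fed: only the hearer gains
            if event == "predator" and reaction != "hide":
                grid[ni][nj]["score"] -= 1.0  # caught: failed to hide

for _ in range(100):
    step()
print(max(agent["score"] for row in grid for agent in row))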
Previous asymptotically correct algorithms for recovering causal structure from sample probabilities have been limited even in sparse graphs to a few variables. We describe an asymptotically correct algorithm whose complexity for fixed graph connectivity increases polynomially in the number of vertices, and may in practice recover sparse graphs with several hundred variables. From..
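A minimal sketch of the skeleton-recovery phase of such an algorithm, assuming an oracle independent(x, y, given) for conditional independence (in practice a statistical test on the sample); the edge-orientation rules are omitted. Because only subsets of a node's current neighbours are tested, the complexity for fixed graph connectivity stays polynomial in the number of vertices.

from itertools import combinations

def recover_skeleton(variables, independent):
    adj = {v: set(variables) - {v} for v in variables}
    k = 0
    while any(len(adj[x]) - 1 >= k for x in variables):
        for x in variables:
            for y in list(adj[x]):
                # Try to separate x and y with k of x's other neighbours.
                for s in combinations(adj[x] - {y}, k):
                    if independent(x, y, set(s)):
                        adj[x].discard(y)
                        adj[y].discard(x)
                        break
        k += 1
    return adj

# Toy oracle for the chain a -> b -> c: a and c are independent given {b}.
def oracle(x, y, given):
    return {x, y} == {"a", "c"} and "b" in given

print(recover_skeleton(["a", "b", "c"], oracle))
# only the a-b and b-c adjacencies remain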
In a recent Mind & Society article, Evans (2005) argues for the social and communicative function of conditional statements. In a related article, we argue for satisficing algorithms for mapping conditional statements onto social domains (Eur J Cogn Psychol 16:807–823, 2004). The purpose of the present commentary is to integrate these two arguments by proposing a revised pragmatic cues algorithm for pragmatic conditionals.
We developed an on-line intelligent argumentation system which facilitates stakeholders in exchanging dialogues. It provides decision support by capturing stakeholders' rationale through arguments. As part of the argumentation process, stakeholders tend to both polarise their opinions and form polarisation groups. The challenging issue of assessing argumentation polarisation had not been addressed in argumentation systems until recently. Arvapally, Liu, and Jiang [(2012), 'Identification of Faction Groups and Leaders in Web-Based Intelligent Argumentation System for Collaborative Decision Support', in Proceedings of International Conference on Collaborative Technologies and Systems] earlier developed a method to identify polarisation groups. These groups, however, tend to overlap to a certain degree; each stakeholder may be a member of multiple polarisation groups to varying degrees. Quantifying stakeholders' membership in multiple polarisation groups is an important issue in argumentation for collaborative decision-making, which has not been addressed earlier. We present a novel approach using a fuzzy clustering algorithm to address this issue in this article. The method is evaluated using data sets produced from the discussions of 24 stakeholders. Experimental results indicate that our method is effective for both identifying polarisation groups and quantifying stakeholders' degree of membership in each polarisation group.
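A minimal sketch of fuzzy c-means, the kind of fuzzy clustering algorithm applied here: every stakeholder receives a degree of membership in each polarisation group rather than a single label. The two-dimensional "opinion vectors" below are hypothetical.

import numpy as np

def fuzzy_c_means(data, n_clusters, m=2.0, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.random((len(data), n_clusters))
    u /= u.sum(axis=1, keepdims=True)            # memberships sum to 1
    for _ in range(n_iter):
        w = u ** m
        centers = (w.T @ data) / w.sum(axis=0)[:, None]
        dist = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
        dist = np.clip(dist, 1e-12, None)
        u = 1.0 / (dist ** (2 / (m - 1)))        # closer centers, higher degree
        u /= u.sum(axis=1, keepdims=True)
    return centers, u

data = np.array([[0.0, 0.1], [0.1, 0.0], [0.9, 1.0], [1.0, 0.9], [0.5, 0.5]])
centers, memberships = fuzzy_c_means(data, n_clusters=2)
print(memberships.round(2))  # the middle stakeholder belongs to both groups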
The aim of this paper is to construct a tableau decision algorithm for the modal description logic K_ALC with constant domains. More precisely, we present a tableau procedure that is capable of deciding, given an ALC-formula with extra modal operators (which are applied only to concepts and TBox axioms, but not to roles), whether it is satisfiable in a model with constant domains and arbitrary accessibility relations. Tableau-based algorithms have been shown to be practical even for logics of rather high complexity. This gives us grounds to believe that, although the satisfiability problem for K_ALC is known to be NEXPTIME-complete, by providing a tableau decision algorithm we demonstrate that highly expressive description logics with modal operators have a chance to be implementable. The paper gives a solution to an open problem of Baader and Laux.
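For flavour, here is a minimal sketch of tableau-style satisfiability checking for plain ALC concepts in negation normal form, with no TBox and no modal dimension, i.e., only the rule-based expansion that a K_ALC procedure builds on. The tuple encoding of concepts is an illustrative choice.

def sat(label):
    label = set(label)
    changed = True
    while changed:                       # and-rule: add both conjuncts
        changed = False
        for c in list(label):
            if c[0] == "and" and not {c[1], c[2]} <= label:
                label |= {c[1], c[2]}
                changed = True
    for c in label:                      # clash: A and not-A together
        if c[0] == "not" and c[1] in label:
            return False
    for c in label:                      # or-rule: branch on a disjunction
        if c[0] == "or" and c[1] not in label and c[2] not in label:
            return sat(label | {c[1]}) or sat(label | {c[2]})
    for c in label:                      # exists-rule: build role successors,
        if c[0] == "exists":             # pushing matching universals down
            succ = {c[2]} | {d[2] for d in label
                             if d[0] == "forall" and d[1] == c[1]}
            if not sat(succ):
                return False
    return True

A = ("atom", "A")
print(sat({("and", A, ("not", A))}))                     # -> False
print(sat({("and", ("exists", "r", A),
            ("forall", "r", ("not", A)))}))              # -> False
print(sat({("or", A, ("not", A))}))                      # -> True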
Singer proved that the theory of ordered differential fields has a model completion, i.e., the theory of closed ordered differential fields, CODF. As a result, CODF admits elimination of quantifiers. In this paper we give an algorithm to eliminate the quantifiers of CODF-formulas.
Roger Penrose is justly famous for his work in physics and mathematics but he is _notorious_ for his endorsement of the Gödel argument (see his 1989, 1994, 1997). This argument, first advanced by J. R. Lucas (in 1961), attempts to show that Gödel's (first) incompleteness theorem can be seen to reveal that the human mind transcends all algorithmic models of it. Penrose's version of the argument has been seen to fall victim to the original objections raised against Lucas (see Boolos (1990) and, for a particularly intemperate review, Putnam (1994)). Yet I believe that more can and should be said about the argument. Only a brief review is necessary here, although I wish to present the argument in a somewhat peculiar form.
In the technical literature of computer science, the concept of an effective procedure is closely associated with the notion of an instruction that precisely specifies an action. Turing machine instructions are held up as providing paragons of instructions that "precisely describe" or "well define" the actions they prescribe. Numerical algorithms and computer programs are judged effective just insofar as they are thought to be translatable into Turing machine programs. Nontechnical procedures (e.g., recipes, methods) are summarily dismissed as ineffective on the grounds that their instructions lack the requisite precision. But despite the pivotal role played by the notion of a precisely specified instruction in classifying procedures as effective and ineffective, little attention has been paid to the manner in which instructions "precisely specify" the actions they prescribe. It is the purpose of this paper to remedy this defect. The results are startling. The reputed exemplary precision of Turing machine instructions turns out to be a myth. Indeed, the most precise specifications of action are provided not by the procedures of theoretical computer science and mathematics (algorithms) but rather by the nontechnical procedures of everyday life. I close with a discussion of some of the ramifications of these conclusions for understanding and designing concrete computers and their programming languages.
In this paper, I argue that accurate indexical representations have been crucial for the survival and reproduction of Homo sapiens sapiens. Specifically, I want to suggest that reliable processes have been selected for because of their indirect, but close, connection to true belief during the Pleistocene hunter-gatherer period of our ancestral history. True beliefs are not heritable; reliable processes are. Those reliable processes connected with reasoning take the form of Darwinian Algorithms: a plethora of specialized, domain-specific inference rules designed to solve specific, recurrent, adaptive problems in social exchange contexts. Humans do not reason logically, but adaptively.
To compute is to execute an algorithm. More precisely, to say that a device or organ computes is to say that there exists a modelling relationship of a certain kind between it and a formal specification of an algorithm and supporting architecture. The key issue is to delimit the phrase 'of a certain kind'. I call this the problem of distinguishing between standard and nonstandard models of computation. The successful drawing of this distinction guards Turing's 1936 analysis of computation against a difficulty that has persistently been raised against it, and undercuts various objections that have been made to the computational theory of mind.
Deliberate contextual vocabulary acquisition (CVA) is a reader's ability to figure out a (not the) meaning for an unknown word from its "context", without external sources of help such as dictionaries or people. The appropriate context for such CVA is the "belief-revised integration" of the reader's prior knowledge with the reader's "internalization" of the text. We discuss unwarranted assumptions behind some classic objections to CVA, and present and defend a computational theory of CVA that we have adapted to a new classroom curriculum designed to help students use CVA to improve their reading comprehension.
John Locke offered what he considered a sound a priori argument that Mind must come first, must be the original Cause, not merely an Effect: If, then, there must be something eternal, let us see what sort of Being it must be. And to that it is very obvious to Reason, that it must necessarily be a cogitative Being. For it is as impossible to conceive that ever bare incogitative Matter should produce a thinking intelligent Being, as that nothing should of itself produce Matter. Let us suppose any parcel of Matter eternal, great or small, we shall find it, in itself, able to produce nothing. . . . . Matter then, by its own strength, cannot produce in itself so much as Motion: the Motion it has, must also be from Eternity, or else be produced, and added to Matter by some other Being more powerful than Matter. . . . But let us suppose Motion eternal too: yet Matter, incogitative Matter and Motion, whatever changes it might produce of Figure and Bulk, could never produce Thought: Knowledge will still be as far beyond the power of Motion and Matter to produce, as Matter is beyond the power of nothing or nonentity to produce. And I appeal to everyone's own thoughts, whether he cannot as easily conceive Matter produced by nothing, as Thought produced by pure Matter, when before there was no such thing as Thought, or an intelligent Being existing. . . . So if we will suppose nothing first, or eternal: Matter can never begin to be: If we suppose bare Matter, without Motion, eternal: Motion can never begin to be: If we suppose only Matter and Motion first, or eternal: Thought can never begin to be. For it is impossible to conceive that Matter either with or without Motion could have originally in and from itself Sense, Perception and Knowledge, as is evident from hence, that then Sense, Perception, and Knowledge must be a property eternally inseparable from Matter and every particle of it.
A recent proposal to solve the halting problem with the quantum adiabatic algorithm is criticized and found wanting. Contrary to other physical hypercomputers, where one believes that a physical process “computes” a (recursive-theoretic) non-computable function simply because one believes the physical theory that presumably governs or describes such process, believing the theory (i.e., quantum mechanics) in the case of the quantum adiabatic “hypercomputer” is tantamount to acknowledging that the hypercomputer cannot perform its task.
I examine the branch of evolutionary epistemology which tries to account for the character of cognitive mechanisms in animals and humans by extending the biological theory of evolution to the neurophysiological substrates of cognition. Like Plotkin, I construe this branch as a struggling science, and attempt to characterize the sort of theory one might expect to find in this truly interdisciplinary endeavor, an endeavor which encompasses not only evolutionary biology, cognitive psychology, and developmental neuroscience, but also, and especially, the computational modeling of artificial life programming; I suggest that extending Schaffner's notion of interlevel theories to include both horizontal and vertical levels of abstraction best fits the theories currently being developed in cognitive science. Finally, I support this claim with examples drawn from computational modeling data using the genetic algorithm.
This paper deals with the question: what are the key requirements for a physical system to perform digital computation? Time and again cognitive scientists are quick to employ the notion of computation simpliciter when asserting basically that cognitive activities are computational. They employ this notion as if there were a consensus on just what it takes for a physical system to perform computation, and in particular digital computation. Some cognitive scientists in referring to digital computation simply adhere to Turing's notion of computability. Classical computability theory studies what functions on the natural numbers are computable and what mathematical problems are undecidable. Whilst a mathematical formalism of computability may perform a methodological function of evaluating computational theories of certain cognitive capacities, concrete computation in physical systems seems to be required for explaining cognition as an embodied phenomenon. There are many non-equivalent accounts of digital computation in physical systems. I examine only a handful of those in this paper: (1) Turing's account; (2) the triviality "account"; (3) a reconstruction of Smith's account of participatory computation; (4) the Algorithm Execution account. My goal in this paper is twofold. First, it is to identify and clarify some of the underlying key requirements mandated by these accounts. I argue that these differing requirements justify a demand that one commits to a particular account when employing the notion of computation in regard to physical systems. Second, it is to argue that despite the informative role that mathematical formalisms of computability may play in cognitive science, they do not specify the relationship between abstract and concrete computation.
The paper sets out to offer an alternative to the function/argument approach to the most essential aspects of natural language meanings. That is, we question the assumption that semantic completeness (of, e.g., propositions) or incompleteness (of, e.g., predicates) exactly replicates the corresponding grammatical concepts (of, e.g., sentences and verbs, respectively). We argue that even if one gives up this assumption, it is still possible to keep the compositionality of the semantic interpretation of simple predicate/argument structures. In our opinion, compositionality presupposes that we are able to compare arbitrary meanings in terms of information content. This is why our proposal relies on an 'intrinsically' type-free algebraic semantic theory. The basic entities in our models are neither individuals, nor eventualities, nor their properties, but 'pieces of evidence' for believing in the 'truth' or 'existence' or 'identity' of any kind of phenomenon. Our formal language contains a single binary non-associative constructor used for creating structured complex terms representing arbitrary phenomena. We give a finite Hilbert-style axiomatisation and a decision algorithm for the entailment problem of the suggested system.
I use modal logic and transfinite set-theory to define metaphysical foundations for a general theory of computation. A possible universe is a certain kind of situation; a situation is a set of facts. An algorithm is a certain kind of inductively defined property. A machine is a series of situations that instantiates an algorithm in a certain way. There are finite as well as transfinite algorithms and machines of any degree of complexity (e.g., Turing and super-Turing machines and more). There are physically and metaphysically possible machines. There is an iterative hierarchy of logically possible machines in the iterative hierarchy of sets. Some algorithms are such that machines that instantiate them are minds. So there is an iterative hierarchy of finitely and transfinitely complex minds.
Boersma's (1997, 1998) Gradual Learning Algorithm (GLA) performs a sequence of slight re-rankings of the constraint set triggered by mistakes on the incoming stream of data. Data consist of underlying forms paired with the corresponding winner forms. At each iteration, the algorithm needs to complete the current data pair with a corresponding loser form. Tesar and Smolensky (Linguist Inq 29:229–268, 1998) suggest that this current loser should be set equal to the winner predicted by the current ranking. This paper develops a new argument for Tesar and Smolensky's proposal, based on the GLA's factorizability. The underlying typology often encodes non-interacting phonological processes, so that it factorizes into smaller typologies that encode a single process each. The GLA should be able to take advantage of this factorizability, in the sense that a run of the algorithm on the original typology should factorize into independent runs on the factor typologies. Factorizability of the GLA is guaranteed provided the current loser is set equal to the current prediction, providing new support for Tesar and Smolensky's proposal.
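A minimal sketch of the GLA update step under Tesar and Smolensky's proposal, with the current loser set equal to the winner predicted by the current (noisily evaluated) ranking; the constraint names, candidates, and violation profiles below are hypothetical.

import random

def predict(candidates, violations, ranking, noise=2.0):
    """Pick the candidate that best satisfies the noisily-ranked constraints."""
    noisy = {c: r + random.gauss(0, noise) for c, r in ranking.items()}
    order = sorted(ranking, key=lambda c: noisy[c], reverse=True)
    return min(candidates,
               key=lambda cand: [violations[cand][c] for c in order])

def gla_step(candidates, violations, ranking, winner, plasticity=0.1):
    loser = predict(candidates, violations, ranking)  # current prediction
    if loser == winner:
        return
    for c in ranking:
        if violations[loser][c] > violations[winner][c]:
            ranking[c] += plasticity  # promote winner-preferring constraints
        elif violations[winner][c] > violations[loser][c]:
            ranking[c] -= plasticity  # demote loser-preferring constraints

candidates = ["ta", "tat"]
violations = {"ta": {"Max": 1, "NoCoda": 0}, "tat": {"Max": 0, "NoCoda": 1}}
ranking = {"Max": 100.0, "NoCoda": 100.0}
for _ in range(200):
    gla_step(candidates, violations, ranking, winner="tat")
print(ranking)  # Max should end up ranked above NoCoda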
Roger Penrose is infamous for defending a version of John Lucas's argument that Gödel's incompleteness results show that the mind cannot be mechanistically (or, today, computationally) explained. Penrose's argument has been subjected to a number of criticisms which, though correct as far as they go, leave open some peculiar and troubling features of the appeal to Gödel's theorem. I try to reveal these peculiarities and develop a new criticism of the Penrose argument.