Driven by concrete applications, Algorithm Engineering complements theory with the benefits of experimentation and puts equal emphasis on all aspects arising during a cyclic solution process ranging from realistic modeling, design, analysis, ...
The Fast Causal Inference (FCI) algorithm searches for features common to observationally equivalent sets of causal directed acyclic graphs. It is correct in the large sample limit with probability one even if there is a possibility of hidden variables and selection bias. In the worst case, the number of conditional independence tests performed by the algorithm grows exponentially with the number of variables in the data set. This affects both the speed of the algorithm and its accuracy on small samples, because tests of independence conditional on large numbers of variables have very low power. In this paper, I prove that the FCI algorithm can be interrupted at any stage and asked for output. The output from the interrupted algorithm is still correct with probability one in the large sample limit, although possibly less informative (in the sense that it answers “Can’t tell” for a larger number of questions) than if the FCI algorithm had been allowed to continue uninterrupted.
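As a rough illustration of the anytime property described above, the following sketch shows a generic, budget-limited adjacency search in the constraint-based style; it is not the FCI algorithm itself, and `indep_test` is a hypothetical conditional-independence oracle. Interrupting the search after any conditioning-set size still leaves a usable, if coarser, result.

```python
import itertools

def anytime_adjacency_search(variables, indep_test, max_depth, budget):
    """Hedged sketch of the anytime idea: an adjacency search that can be
    stopped after any conditioning-set size and still returns a usable
    (if less informative) graph.  `indep_test(x, y, S)` is a hypothetical
    conditional-independence oracle; this is NOT the FCI algorithm itself."""
    adjacent = {v: set(variables) - {v} for v in variables}
    tests_done = 0
    for depth in range(max_depth + 1):           # grow conditioning sets gradually
        for x in variables:
            for y in list(adjacent[x]):
                others = adjacent[x] - {y}
                for S in itertools.combinations(sorted(others), depth):
                    tests_done += 1
                    if tests_done > budget:       # "interrupted": return what we have
                        return adjacent, depth
                    if indep_test(x, y, set(S)):  # remove edge if independence found
                        adjacent[x].discard(y)
                        adjacent[y].discard(x)
                        break
    return adjacent, max_depth
```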
Argumentation theory underwent a significant development in the Fifties and Sixties: its revival is usually connected to Perelman's criticism of formal logic and the development of informal logic. Interestingly enough, it was during this period that Artificial Intelligence developed, defending the following thesis (from now on referred to as the AI-thesis): human reasoning can be emulated by machines. The paper suggests a reconstruction of the opposition between formal and informal logic as a move against a premise of an argument for the AI-thesis, and suggests making a distinction between a broad and a narrow notion of algorithm that might be used to reformulate the question as a foundational problem for argumentation theory.
if and only if for every W in V, W is independent of the set of all its non-descendants conditional on the set of its parents. One natural question that arises with respect to DAGs is when two DAGs are “statistically equivalent”. One interesting sense of “statistical equivalence” is “d-separation equivalence” (explained in more detail below). In the case of DAGs, d-separation equivalence also corresponds to a variety of other natural senses of statistical equivalence (such as representing the same set of distributions). Theorems characterizing d-separation equivalence for directed acyclic graphs, which can be used as the basis for polynomial time algorithms for checking d-separation equivalence, were provided by Verma and Pearl (1990) and Frydenberg (1990). The question we will examine is how to extend these results to cases where a DAG may have latent (unmeasured) variables or selection bias (i.e. some of the variables in the DAG have been conditioned on). D-separation equivalence is of interest in part because there are algorithms for constructing DAGs with latent variables and selection bias that are based on observed conditional independence relations. For this class of algorithms, it is impossible to determine which of two d-separation equivalent causal structures generated a given probability distribution, given only the set of conditional independence and dependence relations true of the observed distribution. We will describe a polynomial (in the number of vertices) time algorithm for determining when two DAGs which may have latent variables or selection bias are d-separation equivalent.
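For DAGs without latent variables or selection bias, the Verma–Pearl criterion cited above reduces d-separation equivalence to having the same skeleton and the same unshielded colliders; a minimal Python sketch of that baseline check (the case the paper extends) might look like this:

```python
from itertools import combinations

def skeleton(dag):
    """Undirected adjacencies of a DAG given as {node: set(parents)}."""
    edges = set()
    for child, parents in dag.items():
        for p in parents:
            edges.add(frozenset((p, child)))
    return edges

def unshielded_colliders(dag):
    """Triples a -> c <- b where a and b are non-adjacent."""
    skel = skeleton(dag)
    colliders = set()
    for c, parents in dag.items():
        for a, b in combinations(sorted(parents), 2):
            if frozenset((a, b)) not in skel:
                colliders.add((frozenset((a, b)), c))
    return colliders

def d_separation_equivalent(dag1, dag2):
    """Verma-Pearl criterion: same skeleton and same unshielded colliders."""
    return (skeleton(dag1) == skeleton(dag2)
            and unshielded_colliders(dag1) == unshielded_colliders(dag2))

# Example: X -> Y -> Z and X <- Y <- Z are equivalent; X -> Y <- Z is not.
chain1 = {"X": set(), "Y": {"X"}, "Z": {"Y"}}
chain2 = {"Z": set(), "Y": {"Z"}, "X": {"Y"}}
collider = {"X": set(), "Z": set(), "Y": {"X", "Z"}}
print(d_separation_equivalent(chain1, chain2))    # True
print(d_separation_equivalent(chain1, collider))  # False
```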
A substantial amount of recent work in natural language generation has focused on the generation of “one-shot” referring expressions whose only aim is to identify a target referent. Dale and Reiter's Incremental Algorithm (IA) is often thought to be the best algorithm for maximizing the similarity to referring expressions produced by people. We test this hypothesis by eliciting referring expressions from human subjects and computing the similarity between the expressions elicited and the ones generated by algorithms. It turns out that the success of the IA depends substantially on the “preference order” (PO) employed by the IA, particularly in complex domains. While some POs cause the IA to produce referring expressions that are very similar to expressions produced by human subjects, others cause the IA to perform worse than its main competitors; moreover, it turns out to be difficult to predict the success of a PO on the basis of existing psycholinguistic findings or frequencies in corpora. We also examine the computational complexity of the algorithms in question and argue that there are no compelling reasons for preferring the IA over some of its main competitors on these grounds. We conclude that future research on the generation of referring expressions should explore alternatives to the IA, focusing on algorithms, inspired by the Greedy Algorithm, which do not work with a fixed PO.
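As background, a minimal Python sketch of Dale and Reiter's Incremental Algorithm (omitting refinements such as always including the type attribute) shows how the preference order determines which attributes end up in the description:

```python
def incremental_algorithm(target, distractors, preference_order):
    """Sketch of Dale & Reiter's Incremental Algorithm.  `target` and each
    distractor are dicts mapping attributes to values; `preference_order`
    is a list of attribute names.  Attributes are considered in that fixed
    order and added whenever they rule out at least one remaining distractor."""
    description = {}
    remaining = list(distractors)
    for attr in preference_order:
        value = target.get(attr)
        ruled_out = [d for d in remaining if d.get(attr) != value]
        if ruled_out:                      # attribute has discriminatory power
            description[attr] = value
            remaining = [d for d in remaining if d.get(attr) == value]
        if not remaining:                  # referent uniquely identified
            break
    return description

# Toy domain: identify the small black dog among other animals.
target = {"type": "dog", "colour": "black", "size": "small"}
others = [{"type": "cat", "colour": "black", "size": "small"},
          {"type": "dog", "colour": "white", "size": "large"}]
print(incremental_algorithm(target, others, ["type", "colour", "size"]))
# {'type': 'dog', 'colour': 'black'}
```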
In a fundamental paper [Phys. Rev. Lett. 78, 325 (1997)] Grover showed how a quantum computer can find a single marked object in a database of size N by using only O(√N) queries of the oracle that identifies the object. His result was generalized to the case of finding one object in a subset of marked elements. We consider the following computational problem: A subset of marked elements is given whose number of elements is either M or K, M < K; our task is to determine which is the case. We show how to solve this problem with a high probability of success using only iterations of Grover's basic step (and no other algorithm). Let m be the required number of iterations; we prove that under certain restrictions on the sizes of M and K the estimation..
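A small numerical sketch, using the standard amplitude-amplification formula sin²((2m+1)θ) with sin θ = √(M/N), illustrates why a fixed number of Grover iterations can separate the two candidate counts; the concrete values of N, M, K, and m below are illustrative choices, not the paper's.

```python
import math

def grover_success_probability(n_total, n_marked, iterations):
    """Probability that measuring after `iterations` Grover steps yields a
    marked element, when `n_marked` of `n_total` items are marked
    (standard amplitude-amplification formula sin^2((2m+1)*theta))."""
    theta = math.asin(math.sqrt(n_marked / n_total))
    return math.sin((2 * iterations + 1) * theta) ** 2

# Toy version of the decision problem: with N = 1024 and candidate counts
# M = 16 or K = 64, the same number of iterations gives very different
# success probabilities, so a few repeated measurements distinguish the cases.
N, M, K = 1024, 16, 64
m = 6   # chosen by hand for this toy example, not by the paper's recipe
print(grover_success_probability(N, M, m))   # ~0.997
print(grover_success_probability(N, K, m))   # ~0.02
```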
We demonstrate a method for optimizing desired functionality in real complex chemical systems, using a genetic algorithm. The chemical systems studied here are mixtures of amphiphiles, which spontaneously exhibit a complex variety of self-assembled molecular aggregations, and the property optimized is turbidity. We also experimentally resolve the fitness landscape in some hyper-planes through the space of possible amphiphile formulations, in order to assess the practicality of our optimization method. Our method shows clear and significant progress after testing only 1% of the possible amphiphile formulations.
The issue of whether formal reasoning or a computing-intensive approach is the most efficient manner to address scientific questions is the subject of considerable debate and pertains not only to the nature of the phenomena and processes investigated by scientists, but also to the nature of the equation and algorithm objects they use. Although algorithms and equations both rely on a common background of mathematical language and logic, they nevertheless possess some critical differences. They do not refer to the same level of symbolization, as equations are based on integrated concepts in a denotational manner, while algorithms specifically break down a complex problem into more elementary operations, in an operational manner. They may therefore be considered as suited to the representation of different phenomena. Specifically, algorithms are by nature sufficient to represent weak emergent phenomena, but not strong emergent patterns, while equations can do both. We propose a simplified classification of scientific issues for which both equation- and/or algorithm-based approaches can be envisaged, and discuss their respective pros and cons. We further discuss the complementary and sometimes conflicting uses of equations and algorithms in a context of ecological theory of metapopulation dynamics. We finally propose both conceptual and practical guidelines for choosing between the alternative approaches.
We extend previous work by modeling evolution of communication using a spatialized genetic algorithm which recombines strategies purely locally. Here cellular automata are used as a spatialized environment in which individuals gain points by capturing drifting food items and are 'harmed' if they fail to hide from migrating predators. Our individuals are capable of making one of two arbitrary sounds, heard only locally by their immediate neighbors. They can respond to sounds from their neighbors by opening their mouths or by hiding. By opening their mouths in the presence of food they maximize gains; by hiding when a predator is present they minimize losses. We consider the result a 'natural' template for benefits from communication; unlike a range of other studies, it is here only the recipient of communicated information that immediately benefits.
Previous asymptotically correct algorithms for recovering causal structure from sample probabilities have been limited even in sparse graphs to a few variables. We describe an asymptotically correct algorithm whose complexity for fixed graph connectivity increases polynomially in the number of vertices, and may in practice recover sparse graphs with several hundred variables. From..
In a recent article published in this journal (van Deemter, Gatt, van der Sluis, & Power, 2012), the authors criticize the Incremental Algorithm (a well-known algorithm for the generation of referring expressions due to Dale & Reiter, 1995, also in this journal) because of its strong reliance on a pre-determined, domain-dependent Preference Order. The authors argue that there are potentially many different Preference Orders that could be considered, while often no evidence is available to determine which is a good one. In this brief note, however, we suggest (based on a learning curve experiment) that finding a Preference Order for a new domain may not be so difficult after all, as long as one has access to a handful of human-produced descriptions collected in a semantically transparent way. We argue that this is due to the fact that it is both more important and less difficult to get a good ordering of the head than of the tail of a Preference Order.
In a recent Mind & Society article, Evans (2005) argues for the social and communicative function of conditional statements. In a related article, we argue for satisficing algorithms for mapping conditional statements onto social domains (Eur J Cogn Psychol 16:807–823, 2004). The purpose of the present commentary is to integrate these two arguments by proposing a revised pragmatic cues algorithm for pragmatic conditionals.
Allen's famous interval-relations constraint propagation algorithm was intended for linear time. Its 13 primitive relations define all the possible mutual locations of two intervals on the time-axis. In this paper an application of the algorithm to non-linear time is suggested. First, a new primitive relation is added. It is called excludes, since an occurrence of one event in a certain course of events excludes an occurrence of the other event in this course. Next, new composition rules for relations between intervals are presented: some of the old rules are extended by the relation excludes, and entirely new ones are formulated for composing the relation excludes with the other relations. Four different composition tables are considered. The choice of a composition table depends on whether time is branching or not, and whether intervals can contain non-collinear subintervals or not.
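For orientation, here is a tiny fragment of the classical (linear-time) composition table and the element-wise composition step at the heart of the propagation algorithm; the table below lists only a handful of entries I am confident of and does not include the paper's added excludes relation.

```python
# A tiny fragment of Allen's composition table for linear time (only a few
# of the 13 x 13 entries, and without the paper's additional `excludes`
# relation).  Composing constraint sets element-wise and intersecting the
# result with existing constraints is the core propagation step.
COMPOSE = {
    ("before", "before"): {"before"},
    ("before", "meets"):  {"before"},
    ("meets",  "before"): {"before"},
    ("meets",  "meets"):  {"before"},
    ("during", "during"): {"during"},
    ("equals", "before"): {"before"},
    ("before", "equals"): {"before"},
}

def compose(rels_ab, rels_bc):
    """Possible relations between A and C given relation sets A-B and B-C."""
    out = set()
    for r1 in rels_ab:
        for r2 in rels_bc:
            out |= COMPOSE.get((r1, r2), set())
    return out

# If A is before-or-meets B and B is before C, then A must be before C.
print(compose({"before", "meets"}, {"before"}))   # {'before'}
```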
The aim of this paper is to construct a tableau decision algorithm for the modal description logic K ALC with constant domains. More precisely, we present a tableau procedure that is capable of deciding, given an ALC-formula with extra modal operators (which are applied only to concepts and TBox axioms, but not to roles), whether it is satisfiable in a model with constant domains and arbitrary accessibility relations. Tableau-based algorithms have been shown to be practical even for logics of rather high complexity. This gives us grounds to believe that, although the satisfiability problem for K ALC is known to be NEXPTIME-complete, by providing a tableau decision algorithm we demonstrate that highly expressive description logics with modal operators have a chance to be implementable. The paper gives a solution to an open problem of Baader and Laux.
This response discusses the experiment reported in Krahmer et al.’s Letter to the Editor of Cognitive Science. We observe that their results do not tell us whether the Incremental Algorithm is better or worse than its competitors, and we speculate about implications for reference in complex domains, and for learning from “normal” (i.e., non-semantically-balanced) corpora.
Roger Penrose is justly famous for his work in physics and mathematics but he is _notorious_ for his endorsement of the Gödel argument (see his 1989, 1994, 1997). This argument, first advanced by J. R. Lucas (in 1961), attempts to show that Gödel’s (first) incompleteness theorem can be seen to reveal that the human mind transcends all algorithmic models of it. Penrose's version of the argument has been seen to fall victim to the original objections raised against Lucas (see Boolos (1990) and, for a particularly intemperate review, Putnam (1994)). Yet I believe that more can and should be said about the argument. Only a brief review is necessary here, although I wish to present the argument in a somewhat peculiar form.
In the technical literature of computer science, the concept of an effective procedure is closely associated with the notion of an instruction that precisely specifies an action. Turing machine instructions are held up as providing paragons of instructions that "precisely describe" or "well define" the actions they prescribe. Numerical algorithms and computer programs are judged effective just insofar as they are thought to be translatable into Turing machine programs. Nontechnical procedures (e.g., recipes, methods) are summarily dismissed as ineffective on the grounds that their instructions lack the requisite precision. But despite the pivotal role played by the notion of a precisely specified instruction in classifying procedures as effective and ineffective, little attention has been paid to the manner in which instructions "precisely specify" the actions they prescribe. It is the purpose of this paper to remedy this defect. The results are startling. The reputed exemplary precision of Turing machine instructions turns out to be a myth. Indeed, the most precise specifications of action are provided not by the procedures of theoretical computer science and mathematics (algorithms) but rather by the nontechnical procedures of everyday life. I close with a discussion of some of the ramifications of these conclusions for understanding and designing concrete computers and their programming languages.
In this paper, I argue that accurate indexical representations have been crucial for the survival and reproduction of homo sapiens sapiens. Specifically, I want to suggest that reliable processes have been selected for because of their indirect, but close, connection to true belief during the Pleistocene hunter-gatherer period of our ancestral history. True beliefs are not heritable; reliable processes are. Those reliable processes connected with reasoning take the form of Darwinian Algorithms: a plethora of specialized, domain-specific inference rules designed to solve specific, recurrent, adaptive problems in social exchange contexts. Humans do not reason logically, but adaptively.
To compute is to execute an algorithm. More precisely, to say that a device or organ computes is to say that there exists a modelling relationship of a certain kind between it and a formal specification of an algorithm and supporting architecture. The key issue is to delimit the phrase "of a certain kind". I call this the problem of distinguishing between standard and nonstandard models of computation. The successful drawing of this distinction guards Turing's 1936 analysis of computation against a difficulty that has persistently been raised against it, and undercuts various objections that have been made to the computational theory of mind.
Deliberate contextual vocabulary acquisition (CVA) is a reader’s ability to figure out a (not the) meaning for an unknown word from its “context”, without external sources of help such as dictionaries or people. The appropriate context for such CVA is the “belief-revised integration” of the reader’s prior knowledge with the reader’s “internalization” of the text. We discuss unwarranted assumptions behind some classic objections to CVA, and present and defend a computational theory of CVA that we have adapted to a new classroom curriculum designed to help students use CVA to improve their reading comprehension.
John Locke offered what he considered a sound a priori argument that Mind must come first, must be the original Cause, not merely an Effect: If, then, there must be something eternal, let us see what sort of Being it must be. And to that it is very obvious to Reason, that it must necessarily be a cogitative Being. For it is as impossible to conceive that ever bare incogitative Matter should produce a thinking intelligent Being, as that nothing should of itself produce Matter. Let us suppose any parcel of Matter eternal, great or small, we shall find it, in itself, able to produce nothing. . . . . Matter then, by its own strength, cannot produce in itself so much as Motion: the Motion it has, must also be from Eternity, or else be produced, and added to Matter by some other Being more powerful than Matter. . . . But let us suppose Motion eternal too: yet Matter, incogitative Matter and Motion, whatever changes it might produce of Figure and Bulk, could never produce Thought: Knowledge will still be as far beyond the power of Motion and Matter to produce, as Matter is beyond the power of nothing or nonentity to produce. And I appeal to everyone's own thoughts, whether he cannot as easily conceive Matter produced by nothing, as Thought produced by pure Matter, when before there was no such thing as Thought, or an intelligent Being existing. . . . So if we will suppose nothing first, or eternal: Matter can never begin to be: If we suppose bare Matter, without Motion, eternal: Motion can never begin to be: If we suppose only Matter and Motion first, or eternal: Thought can never begin to be. For it is impossible to conceive that Matter either with or without Motion could have originally in and from itself Sense, Perception and Knowledge, as is evident from hence, that then Sense, Perception, and Knowledge must be a property eternally inseparable from Matter and every particle of it.
Boersma’s (1997, 1998) Gradual Learning Algorithm (GLA) performs a sequence of slight re-rankings of the constraint set triggered by mistakes on the incoming stream of data. Data consist of underlying forms paired with the corresponding winner forms. At each iteration, the algorithm needs to complete the current data pair with a corresponding loser form. Tesar and Smolensky (Linguist Inq 29:229–268, 1998) suggest that this current loser should be set equal to the winner predicted by the current ranking. This paper develops a new argument for Tesar and Smolensky’s proposal, based on the GLA’s factorizability. The underlying typology often encodes non-interacting phonological processes, so that it factorizes into smaller typologies that encode a single process each. The GLA should be able to take advantage of this factorizability, in the sense that a run of the algorithm on the original typology should factorize into independent runs on the factor typologies. Factorizability of the GLA is guaranteed provided the current loser is set equal to the current prediction, providing new support for Tesar and Smolensky’s proposal.
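A schematic Python sketch of one error-driven GLA step, under the usual description of the algorithm (promote constraints that prefer the winner, demote constraints that prefer the loser, each by a small plasticity), shows where the choice of loser enters: on Tesar and Smolensky's proposal, the loser fed to the update is whatever the stochastic evaluation returns under the current ranking. The evaluation routine below is a simplification for illustration, not Boersma's implementation.

```python
import random

def gla_update(ranking, winner_violations, loser_violations, plasticity=0.1):
    """One error-driven GLA step (sketch): when the grammar predicts a loser
    instead of the observed winner, promote constraints that prefer the
    winner and demote constraints that prefer the loser by `plasticity`."""
    for c in ranking:
        if winner_violations[c] < loser_violations[c]:    # constraint prefers winner
            ranking[c] += plasticity
        elif winner_violations[c] > loser_violations[c]:  # constraint prefers loser
            ranking[c] -= plasticity

def predict(ranking, candidates, noise=2.0):
    """Simplified stochastic evaluation: order constraints by noisy ranking
    values, then keep the candidates that best satisfy each constraint in turn."""
    order = sorted(ranking, key=lambda c: ranking[c] + random.gauss(0, noise),
                   reverse=True)
    best = list(candidates)
    for c in order:
        fewest = min(candidates[cand][c] for cand in best)
        best = [cand for cand in best if candidates[cand][c] == fewest]
        if len(best) == 1:
            break
    return best[0]   # on Tesar & Smolensky's proposal, this is the current loser
```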
Roger Penrose is infamous for defending a version of John Lucas’s argument that Gödel’s incompleteness results show that the mind cannot be mechanistically (or, today, computationally) explained. Penrose’s argument has been subjected to a number of criticisms which, though correct as far as they go, leave open some peculiar and troubling features of the appeal to Gödel’s theorem. I try to reveal these peculiarities and develop a new criticism of the Penrose argument.
Given a set of objects characterized by a number of attributes, hidden patterns can be discovered in them for the grouping of similar objects into clusters. If each of these clusters can be considered as exemplifying a certain concept, then the problem concerned can be referred to as a concept discovery problem. This concept discovery problem can be solved to some extent by existing data clustering techniques. However, they may not be applicable when the concept involved is vague in nature or when the attributes characterizing the objects can be qualitative, quantitative, and fuzzy at the same time. To discover such concepts from objects with such characteristics, we propose a Genetic-Algorithm-based technique. By encoding a specific object grouping in a chromosome and using a fitness measure to evaluate cluster quality, the proposed technique is able to discover meaningful fuzzy clusters and assign membership degrees to objects that do not fully exemplify a certain concept. For evaluation, we tested the proposed technique with simulated and real data and the results are found to be very promising.
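The following is a generic genetic-algorithm clustering sketch, not the authors' encoding or fitness measure: chromosomes are cluster assignments, fitness is (negative) within-cluster squared distance, and standard selection, crossover, and mutation operators evolve the population.

```python
import random

def fitness(assign, data, k):
    """Negative total within-cluster squared distance (1-D data for brevity)."""
    score = 0.0
    for c in range(k):
        members = [x for x, a in zip(data, assign) if a == c]
        if members:
            centre = sum(members) / len(members)
            score -= sum((x - centre) ** 2 for x in members)
    return score

def ga_cluster(data, k=2, pop_size=30, generations=100, mutation_rate=0.05):
    pop = [[random.randrange(k) for _ in data] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda a: fitness(a, data, k), reverse=True)
        parents = pop[: pop_size // 2]                  # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = random.sample(parents, 2)
            cut = random.randrange(1, len(data))        # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [random.randrange(k) if random.random() < mutation_rate else g
                     for g in child]                    # point mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda a: fitness(a, data, k))

# Two obvious 1-D clusters; the best chromosome assigns them separate labels.
print(ga_cluster([1.0, 1.2, 0.9, 10.0, 10.4, 9.8]))
```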
The development of mathematics toward greater precision has led, as is well known, to the formalization of large tracts of it, so that one can prove any theorem using nothing but a few mechanical rules. -- Gödel.
An important approach to game theory is to examine the consequences of beliefs that agents may have about each other. This paper investigates respect for public preferences. Consider an agent A who believes that B strictly prefers an option a to an option b. Then A respects B’s preference if A assigns probability 1 to the choice of a given that B chooses a or b. Respect for public preferences requires that if it is common belief that B prefers a to b, then it is common belief that all other agents respect that preference. Along the lines of Blume, Brandenburger and Dekel, and Asheim, I treat respect for public preferences as a constraint on lexicographic probability systems. The main result is that given respect for public preferences and perfect recall, players choose in accordance with Iterated Backward Inference. Iterated Backward Inference is a procedure that generalizes standard backward induction reasoning for games of both perfect and imperfect information. From Asheim’s characterization of proper rationalizability it follows that properly rationalizable strategies are consistent with respect for public preferences; hence strategies eliminated by Iterated Backward Inference are not properly rationalizable.
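For comparison, here is plain backward induction on a perfect-information game tree, the special case that Iterated Backward Inference generalizes; the tree encoding used here is an illustrative choice, not the paper's formalism.

```python
def backward_induction(node):
    """Standard backward induction on a perfect-information game tree.
    A node is either a tuple of payoffs (terminal) or a dict
    {'player': i, 'moves': {label: child}}.  Returns (payoffs, plan)."""
    if isinstance(node, tuple):                 # terminal node
        return node, {}
    best_label, best_payoffs, plan = None, None, {}
    for label, child in node['moves'].items():
        payoffs, sub_plan = backward_induction(child)
        plan.update(sub_plan)
        if best_payoffs is None or payoffs[node['player']] > best_payoffs[node['player']]:
            best_label, best_payoffs = label, payoffs
    plan[id(node)] = best_label                 # record the chosen move at this node
    return best_payoffs, plan

# Two-stage example: player 0 chooses L or R; after R, player 1 chooses l or r.
tree = {'player': 0, 'moves': {
    'L': (1, 1),
    'R': {'player': 1, 'moves': {'l': (0, 2), 'r': (3, 3)}}}}
print(backward_induction(tree)[0])   # (3, 3): player 1 picks r, so player 0 picks R
```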
There are many algorithm texts that provide lots of well-polished code and proofs of correctness. Instead, this book presents insights, notations, and analogies to help the novice describe and think about algorithms like an expert. By looking at both the big picture and easy step-by-step methods for developing algorithms, the author helps students avoid the common pitfalls. He stresses paradigms such as loop invariants and recursion to unify a huge range of algorithms into a few meta-algorithms. Part of the goal is to teach the students to think abstractly. Without getting bogged down in formal proofs, the book fosters a deeper understanding of how and why each algorithm works. These insights are presented in a slow and clear manner accessible to second- or third-year students of computer science, preparing them to find their own innovative ways to solve problems.
A fixed-parameter algorithm is an algorithm that provides an optimal solution to a combinatorial problem. This research-level text is an application-oriented introduction to the growing and highly topical area of the development and analysis of efficient fixed-parameter algorithms for hard problems. The book is divided into three parts: a broad introduction that provides the general philosophy and motivation; followed by coverage of algorithmic methods developed over the years in fixed-parameter algorithmics, forming the core of the book; and a discussion of the essentials from parameterized hardness theory with a focus on W-hardness, which parallels NP-hardness, then stating some relations to polynomial-time approximation algorithms, and finishing up with a list of selected case studies to show the wide range of applicability of the presented methodology. Aimed at graduate and research mathematicians, programmers, algorithm designers and computer scientists, the book introduces the basic techniques and results and provides a fresh view on this highly innovative field of algorithmic research.
In this paper, I want to deal with the triviality threat to computationalism. On one hand, the controversial and vague claim that cognition involves computation is still denied. On the other, contemporary physicists and philosophers alike claim that all physical processes are indeed computational or algorithmic. This claim would justify the computationalism claim by making it utterly trivial. I will show that even if these two claims were true, computationalism would not have to be trivial.
Many cognitive scientists, having discovered that some computational-level characterization f of a cognitive capacity φ is intractable, invoke heuristics as algorithmic-level explanations of how cognizers compute f. We argue that such explanations are actually dysfunctional, and rebut five possible objections. We then propose computational-level theory revision as a principled and workable alternative.
Contents 1. Introduction 2. Reward-Guided Decision Making 3. Content in the Model 4. How to Deflate a Metarepresentational Reading: Proust and Carruthers on metacognitive feelings 5. A Deflationary Treatment of RPEs? 5.1 Dispensing with prediction errors 5.2 What is the use of the RPE focused on? 5.3 Alternative explanations—worldly correlates 5.4 Contrast cases 6. Conclusion Appendix: Temporal Difference Learning Algorithms.
This paper challenges two orthodox theses: (a) that computational processes must be algorithmic; and (b) that all computed functions must be Turing-computable. Section 2 advances the claim that the works in computability theory, including Turing's analysis of the effectively computable functions, do not substantiate the two theses. It is then shown (Section 3) that we can describe a system that computes a number-theoretic function which is not Turing-computable. The argument against the first thesis proceeds in two stages. It is first shown (Section 4) that whether a process is algorithmic depends on the way we describe the process. It is then argued (Section 5) that systems compute even if their processes are not described as algorithmic. The paper concludes with a suggestion for a semantic approach to computation.
It is common in cognitive science to equate computation (and in particular digital computation) with information processing. Yet, it is hard to find a comprehensive explicit account of concrete digital computation in information processing terms. An information processing account seems like a natural candidate to explain digital computation. But when ‘information’ comes under scrutiny, this account becomes a less obvious candidate. Four interpretations of information are examined here as the basis for an information processing account of digital computation, namely Shannon information, algorithmic information, factual information and instructional information. I argue that any plausible account of concrete computation has to be capable of explaining at least the three key algorithmic notions of input, output and procedures. Whilst algorithmic information fares better than Shannon information, the most plausible candidate for an information processing account is instructional information.
In this paper, we investigate the constitutive problems and several other aspects of what a research program entitled 'formal epistemology' should be. The interest in this subject has to do with the possibility of reaching a privileged point of view or axis of research - i.e., the 'formal' one - that would allow a better grasp of the richness and variety of the facts and problems tackled by precise (local) epistemology of theories (for example, in physics). This approach is likely to enable one to hold the main structural lines that organize those theories according to a more comprehensive, unifying and synthetic intelligibility. By the same token, it would eventually allow a better handling of the changes required in the organization of knowledge, putting emphasis on its main directions, drawing up a rational inventory of this knowledge, and perhaps anticipating others. At first, we deal with the 'thought of changes' that no approach of the 'form' can afford to leave aside, since the meaning of this concept is inseparable from the contents that come with constructions and modifications. We examine then the notion of 'epistemic operation' as an instrument to create new forms on the theoretical as well as on the meta-theoretical levels. In the wake of it, we analyze the characteristics of the form and of the formal, as well as their relationship with the contents of knowledge. We also take the notion of object into account, since it depends upon the decision of a subject and upon conventional choices. We finally inquire about the link between 'epistemic operations' as specified above and algorithmic functions for knowledge statements, and emphasize the risk of reductionism that might follow from a naturalistic conception of representation.
In Darwin’s Dangerous Idea, Daniel Dennett claims that evolution is algorithmic. On Dennett’s analysis, evolutionary processes are trivially algorithmic because he assumes that all natural processes are algorithmic. I will argue that there are more robust ways to understand algorithmic processes that make the claim that evolution is algorithmic empirical and not conceptual. While laws of nature can be seen as compression algorithms of information about the world, it does not follow logically that they are implemented as algorithms by physical processes. For that to be true, the processes have to be part of computational systems. The basic difference between mere simulation and real computing is having proper causal structure. I will show what kind of requirements this poses for natural evolutionary processes if they are to be computational.
I argue that influential purely syntactic views of computation, shared by such philosophers as John Searle and Hilary Putnam, are mistaken. First, I discuss common objections, and during the discussion I mention additional necessary conditions for the implementation of computations in physical processes that are neglected in classical philosophical accounts of computation. Then I try to show why realism with regard to physical computations is more plausible, and more coherent with any realistic attitude towards natural science, than the received view, and distinguish computational simulation, implementation, and re-engineering. I also point out the sources of confusion about what computation is that seem to stem from disregarding the use/mention distinction.
The current debate over systematicity concerns the formal conditions a scheme of mental representation must satisfy in order to explain the systematicity of thought. The systematicity of thought is assumed to be a pervasive property of minds, and can be characterized (roughly) as follows: anyone who can think T can think systematic variants of T, where the systematic variants of T are found by permuting T’s constituents. So, for example, it is an alleged fact that anyone who can think the thought that John loves Mary can think the thought that Mary loves John, where the latter thought is a systematic variant of the former.
Context: Consistency of mathematical constructions in numerical analysis and the application of computerized proofs in the light of the occurrence of numerical chaos in simple systems. Purpose: To show that a computer in general and a numerical analysis in particular can add its own peculiarities to the subject under study. Hence the need for thorough theoretical studies on chaos in numerical simulation. Hence, a questioning of what e.g. a numerical disproof of a theorem in physics or a prediction in numerical economics could mean. Method: A simple algebraic model system is subjected to a deeper structure of underlying variables. With an algorithm simulating the steps in taking a limit of second order difference quotients, the error terms are studied against the background of their algebraic expression. Results: With the algorithm that was applied to a simple quadratic polynomial system we found unstably amplified round-off errors. The possibility of numerical chaos is already known, but not in such a simple system as used in our paper. The amplification of the errors implies that it is not possible with computer means to constructively show that the algebra and numerical analysis will 'in the long run' converge to each other and the error term will vanish. The algebraic vanishing of the error term cannot be demonstrated with the use of the computer because the round-off errors are amplified. In philosophical terms, the amplification of the round-off error is equivalent to the continuum hypothesis. This means that the requirement of (numerical) construction of mathematical objects is no safeguard against inference-only conclusions about qualities of (numerical) mathematical objects. Unstably amplified round-off errors are the same type of problem as the ordering in size of transfinite cardinal numbers. The difference is that the former problem is created within the requirements of constructive mathematics. This can be seen as the reward for working numerically constructive.
In the current dialogue of “science and religion,” it is widely assumed that the thoughts of Darwinists and those of atheists overlap. However, Jerry Fodor, a full-fledged atheist, recently announced a war against Darwinism with his atheistic campaign. Prima facie, this “civil war” might offer a chance for theists: If Fodor is right, Darwinistic atheism will lose the cover of Darwinism and become less tenable. This paper provides a more pessimistic evaluation of the situation by explaining the following: Fodor’s criticism of adaptationism (as the backbone of Darwinism), viz., his refutation of any counterfactual-supporting laws on the macro-evolutionary level, implies that a law-maker is dispensable on this level. This will either encourage skepticism against the omniscience (at least that concerning the future of macro-evolution) of the Creator, or render the notion of God less appealing.
John Searle dismisses the attempt to understand thought as a form of computation, on the grounds that it is not scientific. Science is concerned with intrinsic properties, i.e. those features which are not observer relative, e.g. science is concerned with mass but not with beauty. Computation, according to Searle, presupposes the property of following an algorithm, but algorithmicity is normative, by reason of appealing to function, and hence not intrinsic. I argue that Searle's critique presupposes the folk notion of function, which is indeed normative. But this folk notion can be replaced by a purely descriptive analogue, thereby showing that algorithmicity can be construed as intrinsic after all.
The present commentary addresses the Quartz & Sejnowski (Q&S) target article from the point of view of dynamical learning algorithms for neural networks. These techniques implicitly adopt Q&S's neural constructivist paradigm. Their approach hence receives support from the biological and psychological evidence. Limitations of constructive learning for neural networks are discussed, with an emphasis on grammar learning.
This book constitutes the refereed proceedings of the Third International Symposium on Stochastic Algorithms: Foundations and Applications, SAGA 2005, held in Moscow, Russia in October 2005. The 14 revised full papers presented together with 5 invited papers were carefully reviewed and selected for inclusion in the book. The contributed papers included in this volume cover both theoretical as well as applied aspects of stochastic computations, with a special focus on new algorithmic ideas involving stochastic decisions and the design and evaluation of stochastic algorithms within realistic scenarios.
We argue that some algorithms are value-laden, and that two or more persons who accept different value-judgments may have a rational reason to design such algorithms differently. We exemplify our claim by discussing a set of algorithms used in medical image analysis: In these algorithms it is often necessary to set certain thresholds for whether e.g. a cell should count as diseased or not, and the chosen threshold will partly depend on the software designer’s preference between avoiding false positives and false negatives. This preference ultimately depends on a number of value-judgments. In the last section of the paper we discuss some general principles for dealing with ethical issues in algorithm-design.
By bootstrapping the output of the PC algorithm (Spirtes et al., 2000; Meek 1995), using larger conditioning sets informed by the current graph state, it is possible to define a novel algorithm, JPC, that improves accuracy of search for i.i.d. data drawn from linear, Gaussian, sparse to moderately dense models. The motivation for constructing sepsets using information in the current graph state is to highlight the differences between d-separation information in the graph and conditional independence information extracted from the sample. The same idea can be pursued for any algorithm for which conditioning sets informed by the current graph state can be constructed and for which an orientation procedure capable of orienting undirected graphs can be extracted. Another plausible candidate for such retrofitting is the CPC algorithm (Ramsey et al., 2006), yielding an algorithm, JCPC, which, when the true graph is sparse, is somewhat more accurate than JPC. The method is not feasible for discovery for models of categorical variables, i.e., traditional Bayes nets; with alternative tests for conditional independence it may extend to non-linear or non-Gaussian models, or both.
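The conditional-independence tests such searches rely on for linear-Gaussian data are typically Fisher-z tests of partial correlation; a self-contained sketch of that kind of test (not the JPC implementation) is shown below.

```python
import numpy as np

def fisher_z_independent(data, x, y, cond, crit=1.96):
    """Fisher-z test of whether columns x and y are independent given the
    columns in `cond`, for linear-Gaussian data -- the kind of test a
    PC-style search calls.  `data` is an (n_samples, n_vars) array;
    crit=1.96 corresponds roughly to a two-sided test at the 0.05 level."""
    n = data.shape[0]
    if cond:
        # Residualize x and y on the conditioning set (plus an intercept).
        Z = np.column_stack([data[:, cond], np.ones(n)])
        rx = data[:, x] - Z @ np.linalg.lstsq(Z, data[:, x], rcond=None)[0]
        ry = data[:, y] - Z @ np.linalg.lstsq(Z, data[:, y], rcond=None)[0]
    else:
        rx, ry = data[:, x], data[:, y]
    r = np.corrcoef(rx, ry)[0, 1]               # (partial) correlation
    z = 0.5 * np.log((1 + r) / (1 - r))         # Fisher z-transform
    stat = np.sqrt(n - len(cond) - 3) * abs(z)
    return stat < crit                          # True => treat as independent

# Chain X -> Y -> Z: X and Z are dependent marginally, independent given Y.
rng = np.random.default_rng(0)
X = rng.normal(size=2000)
Y = X + rng.normal(size=2000)
Z = Y + rng.normal(size=2000)
d = np.column_stack([X, Y, Z])
print(fisher_z_independent(d, 0, 2, []), fisher_z_independent(d, 0, 2, [1]))  # False True
```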
Neural algorithms are conserved during evolution. Neurons with different shapes and using different molecular mechanisms can perform the same computation. However, evolutionary conservation of neural algorithms is not sufficient for claiming the realization of an algorithm for a specific computational problem. A plausible scheme for ontogenetic emergence of the structure of the algorithm must also be provided.
"The Emperor's New Mind" by Roger Penrose has received a great deal of both praise and criticism. This review discusses philosophical aspects of the book that form an attack on the "strong" AI thesis. Eight different versions of this thesis are distinguished, and sources of ambiguity diagnosed, including different requirements for relationships between program and behaviour. Excessively strong versions attacked by Penrose (and Searle) are not worth defending or attacking, whereas weaker versions remain problematic. Penrose (like Searle) regards the notion (...) of an algorithm as central to AI, whereas it is argued here that for the purpose of explaining mental capabilities the architecture of an intelligent system is more important than the concept of an algorithm, using the premise that what makes something intelligent is not what it does but how it does it. What needs to be explained is also unclear: Penrose thinks we all know what consciousness is and claims that the ability to judge Go "del's formula to be true depends on it. He also suggests that quantum phenomena underly consciousness. This is rebutted by arguing that our existing concept of "consciousness" is too vague and muddled to be of use in science. This and related concepts will gradually be replaced by a more powerful theory-based taxonomy of types of mental states and processes. The central argument offered by Penrose against the strong AI thesis depends on a tempting but unjustified interpretation of Goedel's incompleteness theorem. Some critics are shown to have missed the point of his argument. A stronger criticism is mounted, and the relevance of mathematical Platonism analysed. Architectural requirements for intelligence are discussed and differences between serial and parallel implementations analysed. (shrink)
The input data to grammar learning algorithms often consist of overt forms that do not contain full structural descriptions. This lack of information may contribute to the failure of learning. Past work on Optimality Theory introduced Robust Interpretive Parsing (RIP) as a partial solution to this problem. We generalize RIP and suggest replacing the winner candidate with a weighted mean violation of the potential winner candidates. A Boltzmann distribution is introduced on the winner set, and the distribution's parameter T is gradually decreased. Finally, we show that GRIP, the Generalized Robust Interpretive Parsing Algorithm, significantly improves the learning success rate in a model with standard constraints for metrical stress assignment.
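A small sketch of the Boltzmann weighting over the potential winners shows the intended effect of gradually lowering the temperature parameter T: the weights move from a broad average over the candidate set toward the single best candidate. The harmony values below are made up for illustration.

```python
import math

def boltzmann_weights(harmonies, temperature):
    """Boltzmann distribution over the set of potential winners: candidates
    with higher harmony get more weight, and as `temperature` decreases the
    distribution concentrates on the best candidate(s)."""
    exps = [math.exp(h / temperature) for h in harmonies]
    total = sum(exps)
    return [e / total for e in exps]

# As T is gradually lowered, the weighted "mean violation" shifts from
# averaging over all potential winners toward the single best parse.
for T in (4.0, 1.0, 0.25):
    print(T, [round(w, 3) for w in boltzmann_weights([-2.0, -1.0, -0.5], T)])
```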
Why should one believe that conscious awareness is solely the result of organizational complexity? What is the connection between consciousness and combinatorics: transformation of quantity into quality? The claim that the former is reducible to the latter seems unconvincing—as unlike as chalk and cheese! In his book Penrose is at least attempting to compare like with like: the enigma of consciousness with the progress of physics.
This paper sets out to adduce visual evidence for Kroneckerian finitism by making perspicuous some of the insights that buttress Kronecker’s conception of arithmetization as a process aiming at disclosing the arithmetical essence enshrined in analytical formulas, by spotting discrete patterns through algorithmic fine-tuning. In the light of a fairly tractable case study, it is argued that Kronecker’s main tenet in philosophy of mathematics is not so much an ontological as a methodological one, inasmuch as highly demanding requirements regarding mathematical understanding prevail over mere preemptive reductionism to whole numbers.
There have been many efforts to infer causation from association by using statistical models. Algorithms for automating this process are a more recent innovation. In Humphreys and Freedman [(1996) British Journal for the Philosophy of Science 47, 113–123] we showed that one such approach, by Spirtes et al., was fatally flawed. Here we put our arguments in a broader context and reply to Korb and Wallace [(1997) British Journal for the Philosophy of Science 48, 543–553] and to Spirtes et al. [(1997) British Journal for the Philosophy of Science 48, 555–568]. Their arguments leave our position unchanged: claims to have developed a rigorous engine for inferring causation from association are premature at best, the theorems have no implications for samples of any realistic size, and the examples used to illustrate the algorithms are indicative of failure rather than success. The gap between association and causation has yet to be bridged.
The tendency towards an increasing integration of the informational web into our daily physical world (in particular in so-called Ambient Intelligent technologies, which combine ideas derived from the fields of Ubiquitous Computing, Intelligent User Interfaces and Ubiquitous Communication) is likely to make the development of successful profiling and personalization algorithms, like the ones currently used by internet companies such as Amazon, even more important than it is today. I argue that the way in which we experience ourselves necessarily goes through a moment of technical mediation. Because of this, algorithmic profiling that thrives on continuous reconfiguration of identification should not be understood as a supplementary process which maps a pre-established identity that exists independently from the profiling practice. In order to clarify how the experience of one’s identity can become affected by such machine-profiling, a theoretical exploration of identity is made (including Agamben’s understanding of an apparatus, Ricoeur’s distinction between idem- and ipse-identity, and Stiegler’s notion of a conjunctive–disjunctive relationship towards retentional apparatuses). Although it is clear that no specific predictions about the impact of Ambient Intelligent technologies can be made without taking more particulars into account, the theoretical concepts are used to describe three general scenarios about the way wherein the experience of identity might become affected. To conclude, I argue that the experience of one’s identity may affect whether the cases of unwarranted discrimination resulting from ubiquitous differentiations and identifications within an Ambient Intelligent environment will become a matter of societal concern.
I discuss the philosophical implications that the rising new science of quantum computing may have on the philosophy of computer science. While quantum algorithms leave the notion of Turing-computability intact, they may re-describe the abstract space of computational complexity theory and hence militate against the autonomous character of some of the concepts and categories of computer science.
...conclusions are only probably correct. On the other hand, algorithmic information theory provides a precise mathematical definition of the notion of a random or patternless sequence. In this paper we shall describe conditions under which, if the sequence of coin tosses in the Solovay–Strassen and Miller–Rabin algorithms is replaced by a sequence of heads and tails that is of maximal algorithmic information content, i.e., has maximal algorithmic randomness, then one obtains an error-free test for primality. These results are only of theoretical interest, since it is a manifestation of the Gödel incompleteness phenomenon that it is impossible to “certify” a sequence to be random by means of a proof, even though most sequences have this property. Thus by using certified random sequences one can in principle, but not in practice, convert probabilistic tests for primality into deterministic ones.
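For reference, here is a standard Miller–Rabin test written so that the "coin tosses" are an explicit sequence of bases passed in, which is the place where the abstract's certified-random sequence would be substituted:

```python
def miller_rabin(n, bases):
    """Miller-Rabin primality test.  `bases` plays the role of the coin
    tosses: a composite n survives a random base with probability < 1/4,
    and the abstract's point is that a provably random base sequence would
    make the test error-free in principle."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d, s = n - 1, 0
    while d % 2 == 0:                 # write n - 1 = d * 2**s with d odd
        d //= 2
        s += 1
    for a in bases:
        a %= n
        if a in (0, 1, n - 1):
            continue
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False              # base a witnesses that n is composite
    return True                       # "probably prime" relative to these bases

print(miller_rabin(561, [2, 3]))            # 561 is a Carmichael number -> False
print(miller_rabin(104729, [2, 3, 5, 7]))   # 10000th prime -> True
```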
We consider a special case of heuristics, namely numeric heuristic evaluation functions, and their use in artificial intelligence search algorithms. The problems they are applied to fall into three general classes: single-agent path-finding problems, two-player games, and constraint-satisfaction problems. In a single-agent path-finding problem, such as the Fifteen Puzzle or the travelling salesman problem, a single agent searches for a shortest path from an initial state to a goal state. Two-player games, such as chess and checkers, involve an adversarial relationship between two players, each trying to win the game. In a constraint-satisfaction problem, such as the 8-Queens problem, the task is to find a state that satisfies a set of constraints. All of these problems are computationally intensive, and heuristic evaluation functions are used to reduce the amount of computation required to solve them. In each case we explain the nature of the evaluation functions used, how they are used in search algorithms, and how they can be automatically learned or acquired.
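As a concrete instance of a heuristic evaluation function in single-agent path-finding, the sketch below runs A* on a small grid using the Manhattan distance; because that heuristic never overestimates the remaining cost, the returned value is the length of a shortest path. The grid and coordinates are illustrative.

```python
import heapq

def a_star(grid, start, goal):
    """A* search on a 4-connected grid (0 = free, 1 = wall) with the Manhattan
    distance as the heuristic evaluation function."""
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start)]
    best_g = {start: 0}
    while frontier:
        f, g, (r, c) = heapq.heappop(frontier)
        if (r, c) == goal:
            return g                                   # length of a shortest path
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(frontier, (ng + h((nr, nc)), ng, (nr, nc)))
    return None                                        # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(a_star(grid, (0, 0), (2, 0)))   # 6: forced around the wall
```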
Algorithmic information theory, or the theory of Kolmogorov complexity, has become an extraordinarily popular theory, and this is no doubt due, in some part, to the fame of Chaitin’s incompleteness results arising from this field. Actually, there are two rather different results by Chaitin: the earlier one concerns the finite limit of the provability of complexity (see Chaitin, 1974a, 1974b, 1975a); and the later one is related to random reals and the halting probability (see Chaitin, 1986, 1987a, 1987b, 1988, 1989).
There is significant interest in type-free systems that allow flexible self-application. Such systems are of interest in property theory, natural language semantics, the theory of truth, theoretical computer science, the theory of classes, and category theory. While there are a variety of proposed type-free systems, there is a particularly natural type-free system that we believe is prototypical: the logic of recursive algorithms. Algorithmic logic is the study of basic statements concerning algorithms and the algorithmic rules of inference between such statements. As shown in , the threat of paradoxes, such as the Curry paradox, requires care in implementing rules of inference in this context. As in any type-free logic, some traditional rules will fail. The first part of the paper develops a rich collection of inference rules that do not lead to paradox. The second part identifies traditional rules of logic that are paradoxical in algorithmic logic, and so should be viewed with suspicion in type-free logic generally.
According to a traditional view, scientific laws and theories constitute algorithmic compressions of empirical data sets collected from observations and measurements. This article defends the thesis that, to the contrary, empirical data sets are algorithmically incompressible. The reason is that individual data points are determined partly by perturbations, or causal factors that cannot be reduced to any pattern. If empirical data sets are incompressible, then they exhibit maximal algorithmic complexity, maximal entropy and zero redundancy. They are therefore maximally efficient carriers of information about the world. Since, on algorithmic information theory, a string is algorithmically random just if it is incompressible, the thesis entails that empirical data sets consist of algorithmically random strings of digits. Rather than constituting compressions of empirical data, scientific laws and theories pick out patterns that data sets exhibit with a certain noise.
Recently, certain philosophers of mathematics (Fallis; Womack and Farach (1997)) have argued that there are no epistemic considerations that should stop mathematicians from using probabilistic methods to establish that mathematical propositions are true. However, mathematicians clearly should not use methods that are unreliable. Unfortunately, due to the fact that randomized algorithms are not really random in practice, there is reason to doubt their reliability. In this paper, I analyze the prospects for establishing that randomized algorithms are reliable. I end by arguing that it would be inconsistent for mathematicians to suspend judgement on the truth of mathematical propositions on the basis of worries about the reliability of randomized algorithms.
We develop a functional abstraction principle for the type-free algorithmic logic introduced in our earlier work. Our approach is based on the standard combinators but is supplemented by the novel use of evaluation trees. Then we show that the abstraction principle leads to a Curry fixed point, a statement C that asserts C ⇒ A where A is any given statement. When A is false, such a C yields a paradoxical situation. As discussed in our earlier work, this situation leaves one no choice but to restrict the use of a certain class of implicational rules including modus ponens.
Description logics are a family of knowledge representation formalisms that are descended from semantic networks and frames via the system KL-ONE. During the last decade, it has been shown that the important reasoning problems (like subsumption and satisfiability) in a great variety of description logics can be decided using tableau-like algorithms. This is not very surprising since description logics have turned out to be closely related to propositional modal logics and logics of programs (such as propositional dynamic logic), for which tableau procedures have been quite successful. Nevertheless, due to different underlying intuitions and applications, most description logics differ significantly from run-of-the-mill modal and program logics. Consequently, the research on tableau algorithms in description logics led to new techniques and results, which are, however, also of interest for modal logicians. In this article, we will focus on three features that play an important rôle in description logics (number restrictions, terminological axioms, and role constructors), and show how they can be taken into account by tableau algorithms.
Independence of Conditionals (IC) has recently been proposed as a basic rule for causal structure learning. If a Bayesian network represents the causal structure, its Conditional Probability Distributions (CPDs) should be algorithmically independent. In this paper we compare IC with causal faithfulness (FF), stating that only those conditional independences that are implied by the causal Markov condition hold true. The latter is a basic postulate in common approaches to causal structure learning. The common spirit of FF and IC is to reject causal graphs for which the joint distribution looks ‘non-generic’. The difference lies in the notion of genericity: FF sometimes rejects models just because one of the CPDs is simple, for instance if the CPD describes a deterministic relation. IC does not behave in this undesirable way. It only rejects a model when there is a non-generic relation between different CPDs although each CPD looks generic when considered separately. Moreover, it detects relations between CPDs that cannot be captured by conditional independences. IC therefore helps in distinguishing causal graphs that induce the same conditional independences (i.e., they belong to the same Markov equivalence class). The usual justification for FF implicitly assumes a prior that is a probability density on the parameter space. IC can be justified by Solomonoff’s universal prior, assigning non-zero probability to those points in parameter space that have a finite description. In this way, it favours simple CPDs, and therefore respects Occam’s razor. Since Kolmogorov complexity is uncomputable, IC is not directly applicable in practice. We argue that it is nevertheless helpful, since it has already served as inspiration and justification for novel causal inference algorithms.
James McAllister’s 2003 article, “Algorithmic randomness in empirical data”, claims that empirical data sets are algorithmically random, and hence incompressible. We show that this claim is mistaken. We present theoretical arguments and empirical evidence for compressibility, and discuss the matter in the framework of Minimum Message Length (MML) inference, which shows that the theory which best compresses the data is the one with highest posterior probability, and the best explanation of the data.
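The compressibility claim is easy to probe empirically with an off-the-shelf compressor (only a crude stand-in for MML or Kolmogorov-style arguments): a patterned sequence compresses dramatically, while uniform noise of the same length does not. The particular sequences below are made up for illustration.

```python
import zlib, random

def compression_ratio(data: bytes) -> float:
    """Compressed size divided by original size (zlib, level 9)."""
    return len(zlib.compress(data, 9)) / len(data)

random.seed(0)
# A strongly patterned "data set": a repeating arithmetic sequence, digitized.
patterned = bytes((i * 7) % 251 for i in range(10000))
# The same length of independent uniform noise.
noise = bytes(random.randrange(256) for _ in range(10000))

print(compression_ratio(patterned))   # well below 1: the pattern is found
print(compression_ratio(noise))       # close to (or above) 1: incompressible
```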
People usually regard algorithms as more abstract than the programs that implement them. The natural way to formalize this idea is that algorithms are equivalence classes of programs with respect to a suitable equivalence relation. We argue that no such equivalence relation exists.
Evolution is often characterized as a tinkerer that creates efficient but messy solutions to problems. We analyze the nature of the problems that arise when we try to explain and understand cognitive phenomena created by this haphazard design process. We present a theory of explanation and understanding and apply it to a case problem – solutions generated by genetic algorithms. By analyzing the nature of solutions that genetic algorithms present to computational problems, we show that the reason why evolutionary designs are often hard to understand is that they exhibit non-modular functionality, and that breaches of modularity wreak havoc on our strategies of causal and constitutive explanation.