The first decade of this century has seen the nascency of the first mathematical theory of general artificial intelligence. This theory of Universal Artificial Intelligence (UAI) has made significant contributions to many theoretical, philosophical, and practical AI questions. In a series of papers culminating in the book (Hutter, 2005), a sound and complete mathematical model for a super-intelligent agent (AIXI) has been developed and rigorously analyzed. While nowadays most AI researchers avoid discussing intelligence, the award-winning PhD thesis (Legg, 2008) provided the philosophical embedding and investigated the UAI-based universal measure of rational intelligence, which is formal, objective, and non-anthropocentric. Recently, effective approximations of AIXI have been derived and experimentally investigated in the JAIR paper (Veness et al., 2011). This practical breakthrough has resulted in some impressive applications, finally muting the earlier critique that UAI is only a theory. For the first time, without being given any domain knowledge, the same agent is able to self-adapt to a diverse range of interactive environments. For instance, AIXI is able to learn from scratch to play TicTacToe, Pacman, Kuhn Poker, and other games by trial and error, without even being told the rules of the games. These achievements give new hope that the grand goal of Artificial General Intelligence is not elusive. This article provides an informal overview of UAI in context. It attempts to gently introduce a very theoretical, formal, and mathematical subject, and discusses philosophical and technical ingredients, traits of intelligence, some social questions, and the past and future of UAI.
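For reference, the AIXI agent mentioned above can be stated compactly. The following is a sketch of Hutter's expectimax formulation, where $U$ is a universal (monotone) Turing machine, $q$ ranges over environment programs of length $\ell(q)$, $a$, $o$, $r$ are actions, observations, and rewards, and $m$ is the horizon:

```latex
a_k := \arg\max_{a_k} \sum_{o_k r_k} \ldots \max_{a_m} \sum_{o_m r_m}
       \, [r_k + \cdots + r_m]
       \sum_{q \,:\, U(q,\, a_{1:m}) \,=\, o_{1:m} r_{1:m}} 2^{-\ell(q)}
```

Informally: AIXI picks the action that maximizes expected future reward under a universal mixture over all computable environments, each weighted by two to the minus its program length.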
A fundamental problem in artificial intelligence is that nobody really knows what intelligence is. The problem is especially acute when we need to consider artificial systems which are significantly different to humans. In this paper we approach this problem in the following way: we take a number of well known informal definitions of human intelligence that have been given by experts, and extract their essential features. These are then mathematically formalised to produce a general measure of intelligence for arbitrary machines. We believe that this equation formally captures the concept of machine intelligence in the broadest reasonable sense. We then show how this formal definition is related to the theory of universal optimal learning agents. Finally, we survey the many other tests and definitions of intelligence that have been proposed for machines.
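The equation referred to is, in Legg and Hutter's notation (sketched here: $E$ is the class of computable reward-summable environments, $K(\mu)$ the Kolmogorov complexity of environment $\mu$, and $V^{\pi}_{\mu}$ the expected total reward of agent $\pi$ interacting with $\mu$):

```latex
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
```

So an agent's universal intelligence $\Upsilon(\pi)$ is its complexity-weighted average performance over all computable environments, with simpler environments counting more.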
Understanding inductive reasoning is a problem that has engaged mankind for thousands of years. This problem is relevant to a wide range of fields and is integral to the philosophy of science. It has been tackled by many great minds ranging from philosophers to scientists to mathematicians, and more recently computer scientists. In this article we argue the case for Solomonoff Induction, a formal inductive framework which combines algorithmic information theory with the Bayesian framework. Although it achieves excellent theoretical results and is based on solid philosophical foundations, the requisite technical knowledge necessary for understanding this framework has caused it to remain largely unknown and unappreciated in the wider scientific community. The main contribution of this article is to convey Solomonoff induction and its related concepts in a generally accessible form with the aim of bridging this current technical gap. In the process we examine the major historical contributions that have led to the formulation of Solomonoff Induction as well as criticisms of Solomonoff and induction in general. In particular we examine how Solomonoff induction addresses many issues that have plagued other inductive systems, such as the black ravens paradox and the confirmation problem, and compare this approach with other recent approaches.
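The core of the framework can be summarized by Solomonoff's universal prior and the predictor built from it. As a sketch: $U$ is a monotone universal Turing machine, $\ell(p)$ the length of program $p$, and $U(p) = x{*}$ means $p$ outputs some string beginning with $x$:

```latex
M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)},
\qquad
M(x_{t+1} \mid x_{1:t}) \;=\; \frac{M(x_{1:t}\, x_{t+1})}{M(x_{1:t})}
```

Prediction is thus Bayesian conditioning under a prior that favours data generated by short programs, which is where the link to Ockham's razor comes from.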
Increasingly encompassing models have been suggested for our world. Theories range from generally accepted to increasingly speculative to apparently bogus. The progression of theories from ego- to geo- to helio-centric models to universe and multiverse theories and beyond was accompanied by a dramatic increase in the sizes of the postulated worlds, with humans being expelled from their center to ever more remote and random locations. Rather than leading to a true theory of everything, this trend faces a turning point after which the predictive power of such theories decreases (actually to zero). Incorporating the location and other capacities of the observer into such theories avoids this problem and allows one to distinguish meaningful from predictively meaningless theories. This also leads to a truly complete theory of everything consisting of a (conventional objective) theory of everything plus a (novel subjective) observer process. The observer localization is neither based on the controversial anthropic principle, nor has it anything to do with the quantum-mechanical observation process. The suggested principle is extended to more practical (partial, approximate, probabilistic, parametric) world models (rather than theories of everything). Finally, I provide a justification of Ockham's razor, and criticize the anthropic principle, the doomsday argument, the no free lunch theorem, and the falsifiability dogma.
The progression of theories suggested for our world, from ego- to geo- to helio-centric models to universe and multiverse theories and beyond, shows one tendency: The size of the described worlds increases, with humans being expelled from their center to ever more remote and random locations. If pushed too far, a potential theory of everything (TOE) is actually more a theory of nothing (TON). Indeed, such theories have already been developed. I show that including observer localization into such theories is necessary and sufficient to avoid this problem. I develop a quantitative recipe to identify TOEs and distinguish them from TONs and theories in-between. This precisely shows what the problem is with some recently suggested universal TOEs.
The technological singularity refers to a hypothetical scenario in which technological advances virtually explode. The most popular scenario is the creation of super-intelligent algorithms that recursively create ever higher intelligences. It took many decades for these ideas to spread from science fiction to popular science magazines and finally to attract the attention of serious philosophers. David Chalmers' (JCS 2010) article is the first comprehensive philosophical analysis of the singularity in a respected philosophy journal. The motivation of my article is to augment Chalmers' and to discuss some issues not addressed by him, in particular what it could mean for intelligence to explode. In the course of this, I will (have to) provide a more careful treatment of what intelligence actually is, separate speed explosion from intelligence explosion, compare what super-intelligent participants and classical human observers might experience and do, discuss immediate implications for the diversity and value of life, consider possible bounds on intelligence, and contemplate intelligences right at the singularity.
Automated reasoning about uncertain knowledge has many applications. One difficulty when developing such systems is the lack of a completely satisfactory integration of logic and probability. We address this problem directly. Expressive languages like higher-order logic are ideally suited for representing and reasoning about structured knowledge. Uncertain knowledge can be modeled by using graded probabilities rather than binary truth-values. The main technical problem studied in this paper is the following: Given a set of sentences, each having some probability of being true, what probability should be ascribed to other (query) sentences? A natural wish-list, among others, is that the probability distribution (i) is consistent with the knowledge base, (ii) allows for a consistent inference procedure and in particular (iii) reduces to deductive logic in the limit of probabilities being 0 and 1, (iv) allows (Bayesian) inductive reasoning and (v) learning in the limit and in particular (vi) allows confirmation of universally quantified hypotheses/sentences. We translate this wish-list into technical requirements for a prior probability and show that probabilities satisfying all our criteria exist. We also give explicit constructions and several general characterizations of probabilities that satisfy some or all of the criteria and various (counter) examples. We also derive necessary and sufficient conditions for extending beliefs about finitely many sentences to suitable probabilities over all sentences, and in particular least dogmatic or least biased ones. We conclude with a brief outlook on how the developed theory might be used and approximated in autonomous reasoning agents. Our theory is a step towards a globally consistent and empirically satisfactory unification of probability and logic.
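As a toy finite illustration of the question posed above (not the construction in the paper, which works over all sentences of higher-order logic): fix a probability distribution over complete truth assignments ("worlds"), and the probability of any query sentence is the total weight of the worlds satisfying it. All names and numbers below are invented for illustration:

```python
from itertools import product

# Two toy atomic sentences.
atoms = ["rain", "wet"]

def worlds():
    """Enumerate all complete truth assignments over the atoms."""
    for values in product([False, True], repeat=len(atoms)):
        yield dict(zip(atoms, values))

# Assumed prior over worlds (must sum to 1), chosen so the knowledge-base
# sentence "rain -> wet" holds with probability 1 and P(rain) = 0.3.
prior = {
    (False, False): 0.6,
    (False, True):  0.1,
    (True,  False): 0.0,
    (True,  True):  0.3,
}

def probability(sentence):
    """P(sentence) = total prior weight of the worlds in which it is true."""
    return sum(prior[(w["rain"], w["wet"])] for w in worlds() if sentence(w))

# Query: the material conditional "rain -> wet".
p_implies = probability(lambda w: (not w["rain"]) or w["wet"])
p_wet = probability(lambda w: w["wet"])
print(p_implies, p_wet)  # -> 1.0 0.4
```

Note how criterion (iii) shows up even in this miniature: once every world violating a sentence has weight 0, probabilistic inference for that sentence coincides with deductive entailment.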
Modality, morality and belief are among the most controversial topics in philosophy today, and few philosophers have shaped these debates as deeply as Ruth Barcan Marcus. Inspired by her work, a distinguished group of philosophers explore these issues, refine and sharpen arguments and develop new positions on such topics as possible worlds, moral dilemmas, essentialism, and the explanation of actions by beliefs. This 'state of the art' collection honours one of the most rigorous and iconoclastic of philosophical pioneers.
Marcus argues that moral dilemmas are real, but that they are not the result of inconsistent moral principles. Moral principles are consistent just in case there is some world where all principles are 'obeyable.' They are inconsistent just in case there is no world where all are 'obeyable.' What this logical point is meant to show is that moral dilemmas do not make moral codes inconsistent. She also discusses guilt, and argues that guilt is still appropriate even in cases of conflict, even when the agent thinks the right thing to do is clear.
Shaping the Future maps out the ascetic practices of a Nietzschean way of life. Hutter argues that Nietzsche's doctrines are attempts and "temptations" that aim to provoke his free-spirited readers into changing themselves by putting philosophy into practice in their lives.
This book challenges the conventional wisdom that improving democratic politics requires keeping emotion out of it. Marcus advances the provocative claim that the tradition in democratic theory of treating emotion and reason as hostile opposites is misguided and leads contemporary theorists to misdiagnose the current state of American democracy. Instead of viewing the presence of emotion in politics as a failure of rationality and therefore as a failure of citizenship, Marcus argues, democratic theorists need to understand that emotions are in fact a prerequisite for the exercise of reason and thus essential for rational democratic deliberation and political judgment. Attempts to purge emotion from public life not only are destined to fail, but ultimately would rob democracies of a key source of revitalization and change. Drawing on recent research in neuroscience, Marcus shows how emotion functions generally and what role it plays in politics. In contrast to the traditional view of emotion as a form of agitation associated with belief, neuroscience reveals it to be generated by brain systems that operate largely outside of awareness. Two of these systems, "disposition" and "surveillance," are especially important in enabling emotions to produce habits, which often serve a positive function in democratic societies. But anxiety, also a preconscious emotion, is crucial to democratic politics as well because it can inhibit or disable habits and thus clear a space for the conscious use of reason and deliberation. If we acknowledge how emotion facilitates reason and is "cooperatively entangled" with it, Marcus concludes, then we should recognize sentimental citizens as the only citizens really capable of exercising political judgment and of putting their decisions into action.
Originally published in 1984, this book broke new ground in assessing Freud as both an exemplary late-Victorian and as a pivotal figure in the creation of modern thought and culture. In his close reading of various of Freud’s theoretical and clinical texts, including two of the most famous case histories, Steven Marcus uncovers the steps in the development of Freud’s thought, the dynamics and contradictions and ‘the intellectual and emotional urgings, forces and conflicts that were at work… as the first original insights and discoveries that constituted the inception of psychoanalysis as a theory, discipline of inquiry, and new kind of therapy, came suddenly, often unexpectedly and without being bidden, upon Freud’. Central to Professor Marcus’ inquiry is the relationship of Freud’s work to cultural change and to the very process of disclosure, formation and construction in the transition to modernity. Freud’s writings, and the psychoanalytic discipline of which they are the foundations, are placed in the context of their contribution to modern modes of thought, and of their influence on our notions of the centres of significance of each existence as a whole. Freud and the Culture of Psychoanalysis is a major contribution to our understanding of how ideas and theories become internalized into the intellectual framework of our lives and affect the way we think about the world. By moving backward and forward from pre-Freudian to post-Freudian thinkers, Professor Marcus takes us on a journey through cultural transition that is also an exploration of how the individual interacts with his own moment in history to forge new modes of consciousness.
Metaphysics and language: Quine, W. V. O. On the individuation of attributes. Körner, S. On some relations between logic and metaphysics. Marcus, R. B. Does the principle of substitutivity rest on a mistake? Van Fraassen, B. C. Platonism's pyrrhic victory. Martin, R. M. On some prepositional relations. Kearns, J. T. Sentences and propositions.--Basic and combinatorial logic: Orgass, R. J. Extended basic logic and ordinal numbers. Curry, H. B. Representation of Markov algorithms by combinators.--Implication and consistency: Anderson, A. R. Fitch on consistency. Belnap, N. D., Jr. Grammatical propaedeutic. Thomason, R. H. Decidability in the logic of conditionals. Myhill, J. Levels of implication.--Deontic, epistemic, and erotetic logic: Bacon, J. Belief as relative knowledge. Wu, K. J. Believing and disbelieving. Kordig, C. R. Relativized deontic modalities. Harrah, D. A system for erotetic sentences.
Laura Marcus is one of the leading literary critics of modernist literature and culture. Dreams of Modernity: Psychoanalysis, Literature, Cinema covers the period from around 1880 to 1930, when modernity as a form of social and cultural life fed into the beginnings of modernism as a cultural form. Railways, cinema, psychoanalysis and the literature of detection - and their impact on modern sensibility - are four of the chief subjects explored. Marcus also stresses the creativity of modernist women writers, including H. D., Dorothy Richardson and Virginia Woolf. The overriding themes of this work bear on the understanding of the early twentieth century as a transitional age, thus raising the question of how 'the moderns' understood the conditions of their own modernity.
Based on her earlier ground-breaking axiomatization of quantified modal logic, the papers collected here by the distinguished philosopher Ruth Barcan Marcus cover much ground in the development of her thought, spanning from 1961 to 1990. The first essay here introduces themes initially viewed as iconoclastic, such as the necessity of identity, the directly referential role of proper names as "tags", the Barcan Formula about the interplay of possibility and existence, and alternative interpretations of quantification. Marcus also addresses the putative puzzles about substitutivity and about essentialism. The collection also includes influential essays on moral conflict, on belief and rationality, and on some historical figures. Many of her views have been incorporated into current theories, while others remain part of a continuing debate.
This collection of Marcus's non-technical essays includes her earlier ground-breaking axiomatizations of quantified modal logic, and explores such topics as the necessity of identity, the directly referential role of proper names as "tags", the interplay of possibility and existence, and others viewed as iconoclastic when Marcus first addressed them, but now long incorporated into current discussion.
Introduction -- Rational explanation of belief -- Rational explanation of action -- (Non-human) animals and their reasons -- Rational explanation and rational causation -- Events and states -- Physicalism.
I argue that zombies are inconceivable. More precisely, I argue that the conceivability-intuition that is used to demonstrate their possibility has been misconstrued. Thought experiments alleged to feature zombies founder on the fact that, on the one hand, they _must_ involve first-person imagining, and yet, on the other hand, _cannot_. Philosophers who take themselves to have imagined zombies have unwittingly conflated imagining a creature who lacks consciousness with imagining a creature without also imagining the consciousness it may or may not possess.
If a woman in the audience at a presentation raises her hand, we would take this as evidence that she intends to ask a question. In normal circumstances, we would be right to say that she raises her hand because she intends to ask a question. We also expect that there could, in principle, be a causal explanation of her hand’s rising in purely physiological terms. Ordinarily, we take the existence and compatibility of both kinds of causes for granted. But this can come to seem strange. When we imagine tracking the physiological process that culminates in her hand’s rising, it is hard to find a purchase for her intention. The physiological process seems not to need assistance from her intention in order to get where it’s going, chugging along as it does according to principles that appear to have very little in common with ordinary psychological ones. The presumed self-sufficiency of physiological processes can, in a similar fashion, appear to muscle psychological states quite generally out of the causal picture.
It is generally accepted that the most serious threat to the possibility of mental causation is posed by the causal self-sufficiency of physical causal processes. I argue, however, that this feature of the world, which I articulate in a principle I call Completeness, in fact poses no genuine threat to mental causation. Some find Completeness threatening to mental causation because they confuse it with a stronger principle, which I call Closure. Others do not simply conflate Completeness and Closure, but hold that Completeness, together with certain plausible assumptions, _entails_ Closure. I refute the most fully worked-out version of such an argument. Finally, some find Completeness all by itself threatening to mental causation. I argue that one will only find Completeness threatening if one operates with a philosophically distorted conception of mental causation. I thereby defend what I call naïve realism about mental causation.
In recent decades, a view of identity I call Sortalism has gained popularity. According to this view, if a is identical to b, then there is some sortal S such that a is the same S as b. Sortalism has typically been discussed with respect to the identity of objects. I argue that the motivations for Sortalism about object-identity apply equally well to event-identity. But Sortalism about event-identity poses a serious threat to the view that mental events are token identical to physical events: A particular mental event m is identical with a particular physical event p only if there is a sortal S such that m and p are both Ss. If there is no such sortal, the doctrine of token-identity is not true. I argue here that we have no good reason for thinking that there is any such sortal.
Legg and Hutter, as well as subsequent authors, considered intelligent agents through the lens of interaction with reward-giving environments, attempting to assign numeric intelligence measures to such agents, with the guiding principle that a more intelligent agent should gain higher rewards from environments in some aggregate sense. In this paper, we consider a related question: rather than measure the numeric intelligence of one Legg-Hutter agent, how can we compare the relative intelligence of two Legg-Hutter agents? We propose an elegant answer based on the following insight: we can view Legg-Hutter agents as candidates in an election, whose voters are environments, letting each environment vote (via its rewards) which agent (if either) is more intelligent. This leads to an abstract family of comparators simple enough that we can prove some structural theorems about them. It is an open question whether these structural theorems apply to more practical intelligence measures.
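A minimal sketch of the election idea (illustrative only: the function and variable names are ours, and actual Legg-Hutter aggregation would weight environments by complexity rather than count votes equally):

```python
def majority_comparator(rewards_a, rewards_b):
    """Compare two agents by letting each environment cast a vote.

    rewards_a, rewards_b: equal-length sequences giving each agent's
    total reward in environment i. An environment votes for whichever
    agent earned more reward in it, and abstains on a tie.
    Returns 'A', 'B', or 'tie' by majority vote.
    """
    votes_a = sum(1 for ra, rb in zip(rewards_a, rewards_b) if ra > rb)
    votes_b = sum(1 for ra, rb in zip(rewards_a, rewards_b) if rb > ra)
    if votes_a > votes_b:
        return "A"
    if votes_b > votes_a:
        return "B"
    return "tie"

# Agent A wins in environments 0 and 1, B wins in 2, environment 3 abstains.
print(majority_comparator([3, 5, 1, 2], [1, 2, 4, 2]))  # -> A
```

The point of casting comparison this way is that it yields a partial comparison of agents directly, without first compressing each agent's performance profile into a single number.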
The thesis that mental states are physical states enjoys widespread popularity. After the abandonment of type-identity theories, however, this thesis has typically been framed in terms of state tokens. I argue that token states are a philosopher’s fiction, and that debates about the identity of mental and physical state tokens thus rest on a mistake.
There has been much written in recent years about whether a pair of subjects could have visual experiences that represented the colors of objects in their environment in precisely the same way, despite differing significantly in what it was like to undergo them, differing that is, in their qualitative character. The possibility of spectrum inversion has been so much debated in large part because of the threat that it would pose to the more general doctrine of Intentionalism, according to which the representational content of an experience fixes what it is like to undergo it.
Alternative readings of quantification are considered. The absence of an unequivocal translation into ordinary speech is noted. Some examples are cited which, in the opinion of the author, are a result of equivocal readings of quantification, or unnecessarily restrictive readings which obscure its primary function.
The access problem for mathematics arises from the supposition that the referents of mathematical terms inhabit a realm separate from us. Quine’s approach in the philosophy of mathematics dissolves the access problem, though his solution sometimes goes unrecognized, even by those who rely on his framework. This paper highlights both Quine’s position and its neglect. I argue that Michael Resnik’s structuralist, for example, has no access problem for the so-called mathematical objects he posits, despite recent criticism, since he relies on an indispensability argument. Still, Resnik’s structuralist does not provide an account of our access to traditional mathematical objects, and this may be seen as a problem.
Although regular polysemy [e.g. producer for product (John read Dickens) or container for contents (John drank the bottle)] has been extensively studied, there has been little work on why certain polysemy patterns are more acceptable than others. We take an empirical approach to the question, in particular evaluating an account based on rules against a gradient account of polysemy that is based on various radical pragmatic theories (Fauconnier 1985; Nunberg 1995). Under the gradient approach, possible senses become more acceptable as they become more closely related to a word’s default meaning, and the apparent regularity of polysemy is an artefact of having many similarly structured concepts. Using methods for measuring conceptual structure drawn from cognitive psychology, Study 1 demonstrates that a variety of metrics along which possible senses can be related to a default meaning, including conceptual centrality, cue validity and similarity, are surprisingly poor predictors of whether shifts to those senses are acceptable. Instead, sense acceptability was better explained by rule-based approaches to polysemy (e.g. Copestake & Briscoe 1995). Study 2 replicated this finding using novel word meanings in which the relatedness of possible senses was varied. However, while individual word senses were better predicted by polysemy rules than conceptual metrics, our data suggested that rules (like producer for product) had themselves arisen to mark senses that, aggregated over many similar words, were particularly closely related.
Rogers & McClelland's (R&M's) précis represents an important effort to address key issues in concepts and categorization, but few of the simulations deliver what is promised. We argue that the models are seriously underconstrained, importantly incomplete, and psychologically implausible; more broadly, R&M dwell too heavily on the apparent successes without comparable concern for limitations already noted in the literature.