If the intelligence of artificial systems were to surpass that of humans significantly, this would constitute a significant risk for humanity. The time has come to consider these issues, and this consideration must include progress in AI as much as insights from the theory of AI. The papers in this volume try to make cautious headway in setting the problem, evaluating predictions on the future of AI, proposing ways to ensure that AI systems will be beneficial to humans, and critically evaluating such proposals.
There is, in some quarters, concern about high–level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity. In other quarters, these issues are ignored or considered science fiction. We wanted to clarify what the distribution of opinions actually is, what probability the best experts currently assign to high–level machine intelligence coming up within a particular time–frame, which risks they see with that development, and how fast they see these developing. We thus designed a brief questionnaire and distributed it to four groups of experts in 2012/2013. The median estimate of respondents was for a one in two chance that high-level machine intelligence will be developed around 2040-2050, rising to a nine in ten chance by 2075. Experts expect that systems will move on to superintelligence in less than 30 years thereafter. They estimate the chance is about one in three that this development turns out to be ‘bad’ or ‘extremely bad’ for humanity.
Will future lethal autonomous weapon systems (LAWS), or ‘killer robots’, be a threat to humanity? The European Parliament has called for a moratorium on or ban of LAWS; the ‘Contracting Parties to the Geneva Convention at the United Nations’ are presently discussing such a ban, which is supported by the great majority of writers and campaigners on the issue. However, the main arguments in favour of a ban are unsound. LAWS do not support extrajudicial killings, and they do not take responsibility away from humans; in fact they increase the ability to hold humans accountable for war crimes. Using LAWS in war would probably reduce human suffering overall. Finally, the availability of LAWS would probably not increase the probability of war or other lethal conflict—especially as compared to extant remote-controlled weapons. The widespread fear of killer robots is unfounded: They are probably good news.
The contribution of the body to cognition and control in natural and artificial agents is increasingly described as “off-loading computation from the brain to the body”, where the body is said to perform “morphological computation”. Our investigation of four characteristic cases of morphological computation in animals and robots shows that the ‘off-loading’ perspective is misleading. Actually, the contribution of body morphology to cognition and control is rarely computational, in any useful sense of the word. We thus distinguish (1) morphology that facilitates control, (2) morphology that facilitates perception, and the rare cases of (3) morphological computation proper, such as ‘reservoir computing’, where the body is actually used for computation. This result contributes to the understanding of the relation between embodiment and computation: The question for robot design and cognitive science is not whether computation is offloaded to the body, but to what extent the body facilitates cognition and control – how it contributes to the overall ‘orchestration’ of intelligent behaviour.
The philosophy of AI has seen some changes, in particular: 1) AI moves away from cognitive science, and 2) the long-term risks of AI now appear to be a worthy concern. In this context, the classical central concerns – such as the relation of cognition and computation, embodiment, intelligence & rationality, and information – will regain urgency.
While it is often said that robotics should aspire to reproducible and measurable results that allow benchmarking, I argue that a focus on benchmarking can be a hindrance for progress in robotics. The reason is what I call the ‘measure-target confusion’, the confusion between a measure of progress and the target of progress. Progress on a benchmark (the measure) is not identical to scientific or technological progress (the target). In the past, several academic disciplines have been led into pursuing only reproducible and measurable ‘scientific’ results – robotics should be wary of following that line, because results that can be benchmarked must be specific and context-dependent, but robotics targets whole complex systems for a broad variety of contexts. While it is extremely valuable to improve benchmarks to reduce the distance between measure and target, the general problem of measuring progress towards more intelligent machines (the target) will not be solved by benchmarks alone; we need a balanced approach with sophisticated benchmarks, plus real-life testing, plus qualitative judgment.
May lethal autonomous weapons systems—‘killer robots’—be used in war? The majority of writers argue against their use, and those who have argued in favour have done so on a consequentialist basis. We defend the moral permissibility of killer robots, but on the basis of the non-aggregative structure of rights assumed by Just War theory. This is necessary because the most important argument against killer robots, the responsibility trilemma proposed by Rob Sparrow, makes the same assumptions. We show that the crucial moral question is not one of responsibility. Rather, it is whether the technology can satisfy the requirements of fairness in the re-distribution of risk. Not only is this possible in principle, but some killer robots will actually satisfy these requirements. An implication of our argument is that there is a public responsibility to regulate killer robots’ design and manufacture.
[This is the short version of: Müller, Vincent C. and Bostrom, Nick (forthcoming 2016), ‘Future progress in artificial intelligence: A survey of expert opinion’, in Vincent C. Müller (ed.), Fundamental Issues of Artificial Intelligence (Synthese Library 377; Berlin: Springer).] - - - In some quarters, there is intense concern about high–level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity; in other quarters, these issues are ignored or considered science fiction. We wanted to clarify what the distribution of opinions actually is, what probability the best experts currently assign to high–level machine intelligence coming up within a particular time–frame, which risks they see with that development, and how fast they see these developing. We thus designed a brief questionnaire and distributed it to four groups of experts. Overall, the results show an agreement among experts that AI systems will probably reach overall human ability around 2040-2050 and move on to superintelligence in less than 30 years thereafter. The experts say the probability is about one in three that this development turns out to be ‘bad’ or ‘extremely bad’ for humanity.
This is the editorial for a special volume of JETAI, featuring papers by Omohundro, Armstrong/Sotala/Ó hÉigeartaigh, T. Goertzel, Brundage, Yampolskiy, B. Goertzel, Potapov/Rodionov, Kornai and Sandberg. - If the general intelligence of artificial systems were to surpass that of humans significantly, this would constitute a significant risk for humanity – so even if we estimate the probability of this event to be fairly low, it is necessary to think about it now. We need to estimate what progress we can expect, what the impact of superintelligent machines might be, how we might design safe and controllable systems, and whether there are directions of research that should best be avoided or strengthened.
Engineers fine-tune the design of robot bodies for control purposes; however, a methodology or set of tools is largely absent, and optimization of morphology (shape, material properties of robot bodies, etc.) is lagging behind the development of controllers. This has become even more prominent with the advent of compliant, deformable or “soft” bodies. These carry substantial potential regarding their exploitation for control—sometimes referred to as “morphological computation”. In this article, we briefly review different notions of computation by physical systems and propose the dynamical systems framework as the most useful in the context of describing and eventually designing the interactions of controllers and bodies. Then, we look at the pros and cons of simple vs. complex bodies, critically reviewing the attractive notion of “soft” bodies automatically taking over control tasks. We address another key dimension of the design space—whether model-based control should be used and to what extent it is feasible to develop faithful models for different morphologies.
We discuss at some length evidence from cognitive science suggesting that representations of objects based on spatiotemporal information and featural information retrieved bottom-up from a visual scene precede representations of objects that include conceptual information. We argue that a distinction can be drawn between representations with conceptual and nonconceptual content. The distinction is based on perceptual mechanisms that retrieve information in conceptually unmediated ways. The representational contents of the states induced by these mechanisms that are available to a type of awareness called phenomenal awareness constitute the phenomenal content of experience. The phenomenal content of perception contains the existence of objects as separate things that persist in space and time, spatiotemporal information, and information regarding relative spatial relations, motion, surface properties, shape, size, orientation, color, and their functional properties.
While the 2010 EPSRC principles for robotics state a set of five rules about what ‘should’ be done, I argue that they should differentiate between legal obligations and ethical demands. Only if we make this difference can we state clearly what the legal obligations already are, and what additional ethical demands we want to make. I provide suggestions on how to revise the rules in this light and how to make them more structured.
The theory and philosophy of artificial intelligence has come to a crucial point where the agenda for the forthcoming years is in the air. This special volume of Minds and Machines presents leading invited papers from a conference on the “Philosophy and Theory of Artificial Intelligence” that was held in October 2011 in Thessaloniki. Artificial Intelligence is perhaps unique among engineering subjects in that it has raised very basic questions about the nature of computing, perception, reasoning, learning, language, action, interaction, consciousness, humankind, life, etc. – and at the same time it has contributed substantially to answering these questions. There is thus a substantial tradition of work, both on AI by philosophers and of theory within AI itself. - The volume contains papers by Bostrom, Dreyfus, Gomila, O'Regan and Shagrir.
From a European perspective the US debate about gun control is puzzling because we have no such debate: It seems obvious to us that dangerous weapons need tight control and that ‘guns’ fall under that category. I suggest that this difference occurs due to different habits that generate different attitudes and support this explanation with an analogy to the habits about knives. I conclude that it is plausible that individual knife-people or gun-people do not want tight regulatory legislation—but tight knife and gun legislation is morally obligatory anyway. We need to give up our habits for the greater good.
The paper discusses the extended mind thesis with a view to the notions of “agent” and of “mind”, while helping to clarify the relation between “embodiment” and the “extended mind”. I will suggest that the extended mind thesis constitutes a reductio ad absurdum of the notion of ‘mind’; the consequence of the extended mind debate should be to drop the notion of the mind altogether – rather than entering the discussion of how extended it is.
Floridi and Taddeo propose a condition of “zero semantic commitment” for solutions to the grounding problem, and a solution to it. I argue briefly that their condition cannot be fulfilled, not even by their own solution. After a look at Luc Steels' very different competing suggestion, I suggest that we need to re-think what the problem is and what role the ‘goals’ in a system play in formulating the problem. On the basis of a proper understanding of computing, I come to the conclusion that the only sensible grounding problem is how we can explain and re-produce the behavioral ability and function of meaning in artificial computational agents.
The paper argues that the reference of perceptual demonstratives is fixed in a causal nondescriptive way through the nonconceptual content of perception. That content consists first in spatiotemporal information establishing the existence of a separate persistent object, retrieved from a visual scene by the perceptual object segmentation processes that open an object-file for that object. Nonconceptual content also consists in other transducible information, that is, information that is retrieved directly in a bottom-up way from the scene (motion, shape, etc.). The nonconceptual content of the mental states induced when one uses a perceptual demonstrative constitutes the mode of presentation of the perceptual demonstrative that individuates but does not identify the object of perceptual awareness and allows reference to it. On that account, perceptual demonstratives put us in a de re relationship with objects in the world through the non-conceptual information retrieved directly from the objects in the environment.
The theory that all processes in the universe are computational is attractive in its promise to provide an understandable theory of everything. I want to suggest here that this pancomputationalism is not sufficiently clear on which problem it is trying to solve, and how. I propose two interpretations of pancomputationalism as a theory: I) the world is a computer and II) the world can be described as a computer. The first implies a thesis of supervenience of the physical over computation and is thus reduced ad absurdum. The second is underdetermined by the world, and thus equally unsuccessful as a theory. Finally, I suggest that pancomputationalism as metaphor can be useful. – At the Paderborn workshop in 2008, this paper was presented as a commentary to the relevant paper by Gordana Dodig-Crnkovic, “Info-Computationalism and Philosophical Aspects of Research in Information Sciences”.
Lethal Autonomous Weapon Systems are here. Technological development will see them become widespread in the near future. This is a matter of years rather than decades. When the UN Convention on Certain Conventional Weapons meets on 10-14 November 2014, well-considered guidance for a decision on the general policy direction for LAWS is clearly needed. While there is widespread opposition to LAWS—or ‘killer robots’, as they are popularly called—and a growing campaign advocates banning them outright, we argue the opposite. LAWS may very well reduce suffering and death in war. Rather than banning them, they should be regulated, to ensure both compliance with international humanitarian law and that this positive outcome occurs. This policy memo sets out the basic structure and content of the regulation required.
The declared goal of this paper is to fill this gap: “... cognitive systems research needs questions or challenges that define progress. The challenges are not (yet more) predictions of the future, but a guideline to what are the aims and what would constitute progress.” – the quotation being from the project description of EUCogII, the project for the European Network for Cognitive Systems within which this formulation of the ‘challenges’ was originally developed (http://www.eucognition.org). So, we stick our necks out and formulate the challenges for artificial cognitive systems. These challenges are articulated in terms of a definition of what a cognitive system is: a system that learns from experience and uses its acquired knowledge (both declarative and practical) in a flexible manner to achieve its own goals.
There is much discussion about whether the human mind is a computer, whether the human brain could be emulated on a computer, and whether all physical entities are computers (pancomputationalism). These discussions, and others, require criteria for what is digital. I propose that a state is digital if and only if it is a token of a type that serves a particular function - typically a representational function - for the system. This proposal is made on a syntactic level, assuming three levels of description (physical, syntactic, semantic). It suggests that being digital is a matter of discovery, or rather a matter of how we wish to describe the world, if a functional description can be assumed. Given the criterion provided and the necessary empirical research, we should be in a position to decide whether a given system (e.g. the human brain) is a digital system and can thus be reproduced in a different digital system (since digital systems allow multiple realization).
I see four symbol grounding problems: 1) How can a purely computational mind acquire meaningful symbols? 2) How can we get a computational robot to show the right linguistic behavior? These two are misleading. I suggest an 'easy' and a 'hard' problem: 3) How can we explain and re-produce the behavioral ability and function of meaning in artificial computational agents? 4) How does physics give rise to meaning?
Cognition is commonly taken to be computational manipulation of representations. These representations are assumed to be digital, but it is not usually specified what that means and what relevance it has for the theory. I propose a specification for being a digital state in a digital system, especially a digital computational system. The specification shows that identification of digital states requires functional directedness, either for someone or for the system of which it is a part. In the case of digital representations, this means being a token of a representational type, where the function of the type is to represent. [An earlier version of this paper was discussed in the web-conference "Interdisciplines" https://web.archive.org/web/20100221125700/http://www.interdisciplines.org/adaptation/papers/7 ].
This paper investigates the prospects of Rodney Brooks’ proposal for AI without representation. It turns out that the supposedly characteristic features of “new AI” (embodiment, situatedness, absence of reasoning, and absence of representation) are all present in conventional systems: “New AI” is just like old AI. Brooks’ proposal boils down to the architectural rejection of central control in intelligent agents—which, however, turns out to be crucial. Some more recent cognitive science suggests that we might do well to dispose of the image of intelligent agents as central representation processors. If this paradigm shift is achieved, Brooks’ proposal for cognition without representation appears promising for full-blown intelligent agents—though not for conscious agents.
Epistemic theories of truth, such as those presumed to be typical for anti-realism, can be characterised as saying that what is true can be known in principle: p → ◊Kp. However, with statements of the form “p & ¬Kp”, a contradiction arises if they are both true and known. Analysis of the nature of the paradox shows that such statements refute epistemic theories of truth only if the anti-realist motivation for epistemic theories of truth is not taken into account. That motivation, a link between understandability and meaningfulness, suggests changing the above principle and restricting the theory to logically simple sentences, in which case the paradox does not arise. This suggestion also allows us to see the deep philosophical problems for anti-realism at which those counterexamples are pointing.
I want to suggest that the major influence of classical arguments for embodiment like "The Embodied Mind" by Varela, Thompson & Rosch (1991) has been a changing of positions rather than a refutation: Cognitivism has found ways to retreat and regroup at positions that have better fortification, especially when it concerns theses about artificial intelligence or artificial cognitive systems. For example: a) 'Agent-based cognitivism' that understands humans as taking in representations of the world, doing rule-based processing and then acting on them (sense-plan-act) is often limited to conscious decision processes; and b) Purely syntactic cognition is compatible with embodiment, or supplemented by embodiment (e.g. for 'grounding'). While the empirical thesis of embodied cognition ('embodied cognitive science') is true and the practical engineering thesis ('morphological computation', 'cheap design') is often true, the conceptual thesis ('embodiment is necessary for cognition') is likely false - syntax is often enough for cognition, unless grounding is really necessary. I conclude that it has become more sensible to integrate embodiment with traditional approaches rather than "fight for embodiment" or "against cognitivism".
Report for "The Reasoner" on the conference "Philosophy and Theory of Artificial Intelligence", 3 & 4 October 2011, Thessaloniki, Anatolia College/ACT, http://www.pt-ai.org. --- Organization: Vincent C. Müller, Professor of Philosophy at ACT & James Martin Fellow, Oxford http://www.sophia.de --- Sponsors: EUCogII, Oxford-FutureTech, AAAI, ACM-SIGART, IACAP, ECCAI.
Review of: "Computation, Information, Cognition: The Nexus and the Liminal", Ed. Susan Stuart & Gordana Dodig Crnkovic, Newcastle: Cambridge Scholars Publishing, September 2007, xxiv+340pp, ISBN: 9781847180902, Hardback: £39.99, $79.99 ---- Are you a computer? Is your cat a computer? A single biological cell in your stomach, perhaps? And your desk? You do not think so? Well, the authors of this book suggest that you think again. They propose a computational turn, a turn towards computational explanation and towards the explanation of computation itself. The explanation of computation is the core of the present volume, but the computational turn to regard a wide variety of systems as computational is a potentially very wide-ranging project.
The dialogue develops arguments for and against a broad new world system - info-computationalist naturalism - that is supposed to overcome the traditional mechanistic view. It would make the older mechanistic view into a special case of the new general info-computationalist framework (rather like Euclidean geometry remains valid inside a broader notion of geometry). We primarily discuss what the info-computational paradigm would mean, especially its pancomputationalist component. This includes the requirements for the new generalized notion of computing that would include sub-symbolic information processing. We investigate whether pancomputationalism can provide the basic causal structure to the world and whether the overall research program of info-computationalist naturalism appears productive, especially when it comes to new approaches to the living world, including computationalism in the philosophy of mind.
1 The Principles - 2 The Turing Test - 3 Classical Artificial Intelligence - 4 Artificial Intelligence Today - 5 The Artificial Intelligence of the Future - With present-day technologies we will probably struggle to get to the construction of machines with artificial intelligence. In my opinion, we will see other technical solutions using classical artificial intelligence and the 'bottom-up' method, but I do not expect radical progress before we learn much more about our brain. There are very good reasons to say that our mind is not a computer, and I do not expect that we will be able to create cognition with a computer alone. But we might with other machines; why not?
Should we do speculative cognitive science? - In present day philosophy, I see a fashion that uses empirical facts (data) to support positions that are not philosophical but empirical in nature. The argumentative structure is classical philosophy, saying that ‘this has to be that way because …’ where the ‘this’ refers to some empirical state of affairs. This kind of philosophy speculates about empirical facts in areas where we do not yet know the facts – the arguments are a priori, supported by a posteriori data. This is precisely what the speculative philosophy of German Idealism was doing, e.g. in the works of Schelling or Hegel.
This is the short version, in French translation by Anne Querrien, of the originally jointly authored paper: Müller, Vincent C., ‘Autonomous killer robots are probably good news’, in Ezio Di Nucci and Filippo Santoni de Sio, Drones and responsibility: Legal, philosophical and socio-technical perspectives on the use of remotely controlled weapons. - - - The article that follows presents a new robot-based weapons system that may soon be used. Unlike drones, which are operated remotely but involve a substantial element of human judgement, these are machines programmed to defend, attack, or kill autonomously. The authors, philosophers, prefer to warn of their forthcoming spread and to obtain their regulation from the United Nations. An international campaign instead proposes banning them.
Immanuel Kant famously defined philosophy to be about three questions: “What can I know? What should I do? What can I hope for?” (KrV, B833). I want to suggest that the three questions of our course on the philosophy of computing are: What is computing? What should we do with computing? What could computing do?
The world is full of suffering. God is either unable to prevent it (then He is not omnipotent) or unwilling to prevent it (then He is not perfectly good). For generations this has been regarded as the most striking argument against the belief that an omnipotent and perfectly good being exists. Naturally, theists have taken the greatest pains to offer an adequate reply. ... Even if only a single individual had to endure a slight discomfort unnecessarily for a brief moment, the problem would be logically just as real, though it would not be as painful, or might even go unnoticed.
In sections X and XI of the Dialogues Concerning Natural Religion, Hume sets out his views on the traditional theological problem of evil. Hume's remarks on this topic seem to me to contain a rich mixture of insights and errors. My aim in this essay is to disentangle these opposing elements of his discussion.
"Data mining is not an invasion of privacy because access to data is only by machines, not by people": this is the argument that is investigated here. The current importance of this problem is developed in a case study of data mining in the USA for counterterrorism and other surveillance purposes. After a clarification of the relevant nature of privacy, it is argued that access by machines cannot warrant access to further information, since the analysis will have to be made either by humans or by machines that understand. The paper concludes that current data mining violates the right to privacy and should be subject to the standard legal constraints for access to private information by people.
In October 2011, the “2nd European Network for Cognitive Systems, Robotics and Interaction”, EUCogII, held its meeting in Groningen on “Autonomous activity in real-world environments”, organized by Tjeerd Andringa and myself. This is a brief personal report on why we thought autonomy in real-world environments is central for cognitive systems research and what I think I learned about it. --- The theses that crystallized are that a) autonomy is a relative property and a matter of degree, b) increasing autonomy of an artificial system from its makers and users is a necessary feature of increasingly intelligent systems that can deal with the real world, and c) more such autonomy means less control but at the same time improved interaction with the system.
The background to this paper is that in our world of massively increasing personal digital data any control over the data about me seems illusory – informational privacy seems a lost cause. On the other hand, the production of this digital data seems a necessary component of our present life in the industrialized world. A framework for a resolution of this apparent dilemma is provided by the distinction between (meaningless) data and (meaningful) information. I argue that computational data processing is necessary for many present-day processes and not a breach of privacy, while collection and processing of private information is often not necessary and a breach of privacy. The problem and the sketch of its solution are illustrated in a case-study: supermarket customer cards.
Just as AI has moved away from classical AI, human-computer interaction (HCI) must move away from what I call ‘good old fashioned HCI’ to ‘new HCI’ – it must become a part of cognitive systems research where HCI is one case of the interaction of intelligent agents (we now know that interaction is essential for intelligent agents anyway). For such interaction, we cannot just ‘analyze the data’, but we must assume intentions in the other, and I suggest these are largely recognized through resistance to carrying out one’s own intentions. This does not require fully cognitive agents but can start at a very basic level. New HCI integrates into cognitive systems research and designs intentional systems that provide resistance to the human agent.
This paper investigates the view that digital hypercomputing is a good reason for rejection or re-interpretation of the Church-Turing thesis. After suggesting that such re-interpretation is historically problematic and often involves attacking a straw man (the ‘maximality thesis’), it discusses proposals for digital hypercomputing with Zeno-machines, i.e. computing machines that compute an infinite number of computing steps in finite time, thus performing supertasks. It argues that effective computing with Zeno-machines falls into a dilemma: either they are specified such that they do not have output states, or they are specified such that they do have output states, but involve contradiction. Repairs through non-effective methods or special rules for semi-decidable problems are sought, but not found. The paper concludes that hypercomputing supertasks are impossible in the actual world and thus no reason for rejection of the Church-Turing thesis in its traditional interpretation.
Special Issue “Risks of artificial general intelligence”, Journal of Experimental and Theoretical Artificial Intelligence, 26/3 (2014), ed. Vincent C. Müller. http://www.tandfonline.com/toc/teta20/26/3# - Contents: Risks of general artificial intelligence, Vincent C. Müller, pages 297-301; Autonomous technology and the greater human good, Steve Omohundro, pages 303-315; The errors, insights and lessons of famous AI predictions – and what they mean for the future, Stuart Armstrong, Kaj Sotala & Seán S. Ó hÉigeartaigh, pages 317-342; The path to more general artificial intelligence, Ted Goertzel, pages 343-354; Limitations and risks of machine ethics, Miles Brundage, pages 355-372; Utility function security in artificially intelligent agents, Roman V. Yampolskiy, pages 373-389; GOLEM: towards an AGI meta-architecture enabling both goal preservation and radical self-improvement, Ben Goertzel, pages 391-403; Universal empathy and ethical bias for artificial general intelligence, Alexey Potapov & Sergey Rodionov, pages 405-416; Bounding the impact of AGI, András Kornai, pages 417-438; Ethics of brain emulations, Anders Sandberg, pages 439-457.
This paper investigates a problem about freedom of information. Although freedom of information is generally considered desirable, there are a number of areas where there is substantial agreement that it should be limited. After some ordering of the landscape, I argue that we need to add the category of "dangerous" information and that this category has gained a new quality in the context of current information technology, specifically the Internet. This category includes information the use of which would be morally wrong, as well as some of what may be called "corrupting" information. Some such information should not be spread at all, and some should be very limited in its spread.
Hilary Putnam's biography and philosophical development mirror the history of Anglo-American philosophy over the last 40 years. For almost as long, Putnam has substantially shaped that history, so that John Passmore can write of him: "He is the history of contemporary philosophy in outline." This introduction aims above all to present the context in which Putnam stands and from which what he has to say philosophically becomes intelligible. This context is surely one reason why Putnam is still relatively little known in this country, while in the USA he is often considered the most important active philosopher. Within a sketch of Putnam's philosophical development, a preliminary placement in the history of philosophy will also be attempted, even though this cannot be the place for a comprehensive critique or exposition: the introduction must remain at a fairly elementary level and naturally cannot replace a reading of the texts themselves. Since Putnam's work is surely part of a rapprochement between 'analytic' and 'continental' philosophy, the introduction to the texts translated here should finally make clear what Putnam has to offer readers not oriented toward analytic philosophy.
[Müller, Vincent C. (ed.), (2013), Philosophy and theory of artificial intelligence (SAPERE, 5; Berlin: Springer). 429 pp.] --- Can we make machines that think and act like humans or other natural intelligent agents? The answer to this question depends on how we see ourselves and how we see the machines in question. Classical AI and cognitive science had claimed that cognition is computation and can thus be reproduced on other computing machines, possibly surpassing the abilities of human intelligence. This consensus has now come under threat, and the agenda for the philosophy and theory of AI must be set anew, re-defining the relation between AI and cognitive science. We can re-claim the original vision of general AI from the technical AI disciplines; we can reject classical cognitive science and replace it with a new theory (e.g. embodied); or we can try to find new ways to approach AI, for example from neuroscience or from systems theory. To do this, we must go back to the basic questions on computing, cognition and ethics for AI. The 30 papers in this volume provide cutting-edge work from leading researchers that defines where we stand and where we should go from here.
This essay is a critical and exploratory discussion of Plantinga's claim that certain propositions which evidently entail that God exists could be properly basic. In the critical section, I argue that Plantinga fails to show that the modern foundationalist's criterion for proper basicality, according to which such propositions cannot be properly basic, is self-referentially incoherent or otherwise defective. In the exploratory section, I try to develop an argument for the view that even if such propositions could be properly basic, they would be properly basic, if at all, only rarely for intellectually sophisticated adult theists of our culture.
If President Kennedy had not been shot, would he have bombed North Vietnam? God only knows. Or does He? Does at least He know what Kennedy would have done? ... The Jesuits claimed, among other things, that many human actions are free in the sense that those who perform them are not logically or causally compelled to perform them. ('Free' will be used in this sense throughout this essay.) How, then, does God retain control over human history? Not by causally determining human actions, as the Dominicans seem to have believed, but by bringing about circumstances in which He knows we will freely act in accordance with His plans.
In this paper I propose an argument to support Jerry Fodor's thesis (Fodor 1983) that input systems are modular and thus informationally encapsulated. The argument starts with the suggestion that there is a "grounding problem" in perception, i.e. a problem in explaining how perception that can yield a visual experience is possible, how sensation can become meaningful perception of something for the subject. Given that visual experience is actually possible, this invites a transcendental argument that explains the conditions of its possibility. I propose that one of these conditions is the existence of a visual module in Fodor's sense that allows the step from sensation to object-identifying perception, thus enabling visual experience. It seems to follow that there is informationally encapsulated nonconceptual content in visual perception.
The paper presents a paradoxical feature of computational systems that suggests that computationalism cannot explain symbol grounding. If the mind is a digital computer, as computationalism claims, then it must be computing either over meaningful symbols or over meaningless symbols. If it is computing over meaningful symbols, its functioning presupposes the existence of meaningful symbols in the system, i.e. it implies semantic nativism. If the mind is computing over meaningless symbols, no intentional cognitive processes are available prior to symbol grounding; in this case, no symbol grounding could take place, since any grounding presupposes intentional cognitive processes, and so meaning would again have to be native to the system. Thus, whether computing in the mind is over meaningless or over meaningful symbols, computationalism implies semantic nativism.