There is, in some quarters, concern about high-level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity. In other quarters, these issues are ignored or considered science fiction. We wanted to clarify what the distribution of opinions actually is, what probability the best experts currently assign to high-level machine intelligence coming up within a particular time-frame, which risks they see with that development, and how fast they see these developing. We thus designed a brief questionnaire and distributed it to four groups of experts in 2012/2013. The median estimate of respondents was for a one in two chance that high-level machine intelligence will be developed around 2040-2050, rising to a nine in ten chance by 2075. Experts expect that systems will move on to superintelligence in less than 30 years thereafter. They estimate the chance is about one in three that this development turns out to be ‘bad’ or ‘extremely bad’ for humanity.
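As an illustration of how these median estimates combine, here is a back-of-the-envelope calculation. The chaining and the independence assumption are ours, not the survey's, and the conditional probability of superintelligence given high-level machine intelligence is a hypothetical placeholder (the survey reports only a median of less than 30 years for that transition):

```python
# Illustrative only: chaining the survey's median estimates under an
# independence assumption that the survey itself does not make.
p_hlmi_by_2075 = 0.9       # reported: nine in ten chance of high-level machine intelligence by 2075
p_super_given_hlmi = 0.75  # hypothetical placeholder, not a survey figure
p_bad_given_super = 1 / 3  # reported: one in three chance of a 'bad' or 'extremely bad' outcome

p_bad_outcome = p_hlmi_by_2075 * p_super_given_hlmi * p_bad_given_super
print(round(p_bad_outcome, 3))  # → 0.225
```

On these (contestable) assumptions, the chained estimate is roughly one in four or five; the point is only that the survey's headline numbers are conditional on one another, not a single risk figure.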
Artificial intelligence (AI) and robotics are digital technologies that will have significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control these. - After the Introduction to the field (§1), the main themes (§2) of this article are: Ethical issues that arise with AI systems as objects, i.e., tools made and used by humans. This includes issues of privacy (§2.1) and manipulation (§2.2), opacity (§2.3) and bias (§2.4), human-robot interaction (§2.5), employment (§2.6), and the effects of autonomy (§2.7). Then AI systems as subjects, i.e., ethics for the AI systems themselves in machine ethics (§2.8) and artificial moral agency (§2.9). Finally, the problem of a possible future AI superintelligence leading to a “singularity” (§2.10). We close with a remark on the vision of AI (§3). - For each section within these themes, we provide a general explanation of the ethical issues, outline existing positions and arguments, then analyse how these play out with current technologies, and finally consider what policy consequences may be drawn.
Will future lethal autonomous weapon systems (LAWS), or ‘killer robots’, be a threat to humanity? The European Parliament has called for a moratorium or ban of LAWS; the ‘Contracting Parties to the Geneva Convention at the United Nations’ are presently discussing such a ban, which is supported by the great majority of writers and campaigners on the issue. However, the main arguments in favour of a ban are unsound. LAWS do not support extrajudicial killings, they do not take responsibility away from humans; in fact, they increase the ability to hold humans accountable for war crimes. Using LAWS in war would probably reduce human suffering overall. Finally, the availability of LAWS would probably not increase the probability of war or other lethal conflict—especially as compared to extant remote-controlled weapons. The widespread fear of killer robots is unfounded: they are probably good news.
May lethal autonomous weapons systems—‘killer robots’—be used in war? The majority of writers argue against their use, and those who have argued in favour have done so on a consequentialist basis. We defend the moral permissibility of killer robots, but on the basis of the non-aggregative structure of right assumed by Just War theory. This is necessary because the most important argument against killer robots, the responsibility trilemma proposed by Rob Sparrow, makes the same assumptions. We show that the crucial moral question is not one of responsibility. Rather, it is whether the technology can satisfy the requirements of fairness in the re-distribution of risk. Not only is this possible in principle, but some killer robots will actually satisfy these requirements. An implication of our argument is that there is a public responsibility to regulate killer robots’ design and manufacture.
The contribution of the body to cognition and control in natural and artificial agents is increasingly described as “off-loading computation from the brain to the body”, where the body is said to perform “morphological computation”. Our investigation of four characteristic cases of morphological computation in animals and robots shows that the ‘off-loading’ perspective is misleading. Actually, the contribution of body morphology to cognition and control is rarely computational, in any useful sense of the word. We thus distinguish (1) morphology that facilitates control, (2) morphology that facilitates perception, and the rare cases of (3) morphological computation proper, such as ‘reservoir computing’, where the body is actually used for computation. This result contributes to the understanding of the relation between embodiment and computation: the question for robot design and cognitive science is not whether computation is offloaded to the body, but to what extent the body facilitates cognition and control – how it contributes to the overall ‘orchestration’ of intelligent behaviour.
We discuss at some length evidence from cognitive science suggesting that representations of objects based on spatiotemporal information and featural information retrieved bottom-up from a visual scene precede representations of objects that include conceptual information. We argue that a distinction can be drawn between representations with conceptual and nonconceptual content. The distinction is based on perceptual mechanisms that retrieve information in conceptually unmediated ways. The representational contents of the states induced by these mechanisms that are available to a type of awareness called phenomenal awareness constitute the phenomenal content of experience. The phenomenal content of perception contains the existence of objects as separate things that persist in time, spatiotemporal information, and information regarding relative spatial relations, motion, surface properties, shape, size, orientation, color, and their functional properties.
[Müller, Vincent C. (ed.), (2016), Fundamental issues of artificial intelligence (Synthese Library, 377; Berlin: Springer). 570 pp.] -- This volume offers a look at the fundamental issues of present and future AI, especially from cognitive science, computer science, neuroscience and philosophy. This work examines the conditions for artificial intelligence, how these relate to the conditions for intelligence in humans and other natural agents, as well as ethical and societal problems that artificial intelligence raises or will raise. The key issues this volume investigates include the relation of AI and cognitive science, ethics of AI and robotics, brain emulation and simulation, hybrid systems and cyborgs, intelligence and intelligence testing, interactive systems, multi-agent systems, and superintelligence. Based on the 2nd conference on “Theory and Philosophy of Artificial Intelligence”, held in Oxford, the volume includes contributions from prominent researchers in the field from around the world.
The theory that all processes in the universe are computational is attractive in its promise to provide an understandable theory of everything. I want to suggest here that this pancomputationalism is not sufficiently clear on which problem it is trying to solve, and how. I propose two interpretations of pancomputationalism as a theory: I) the world is a computer and II) the world can be described as a computer. The first implies a thesis of supervenience of the physical over computation and is thus reduced to absurdity. The second is underdetermined by the world, and thus equally unsuccessful as a theory. Finally, I suggest that pancomputationalism as metaphor can be useful. – At the Paderborn workshop in 2008, this paper was presented as a commentary to the relevant paper by Gordana Dodig-Crnkovic, "Info-Computationalism and Philosophical Aspects of Research in Information Sciences".
In October 2011, the “2nd European Network for Cognitive Systems, Robotics and Interaction”, EUCogII, held its meeting in Groningen on “Autonomous activity in real-world environments”, organized by Tjeerd Andringa and myself. This is a brief personal report on why we thought autonomy in real-world environments is central for cognitive systems research and what I think I learned about it. --- The theses that crystallized are that a) autonomy is a relative property and a matter of degree, b) increasing autonomy of an artificial system from its makers and users is a necessary feature of increasingly intelligent systems that can deal with the real world, and c) more such autonomy means less control but at the same time improved interaction with the system.
This paper investigates the prospects of Rodney Brooks’ proposal for AI without representation. It turns out that the supposedly characteristic features of “new AI” (embodiment, situatedness, absence of reasoning, and absence of representation) are all present in conventional systems: “new AI” is just like old AI. Brooks’ proposal boils down to the architectural rejection of central control in intelligent agents—which, however, turns out to be crucial. Some of the more recent cognitive science suggests that we might do well to dispose of the image of intelligent agents as central representation processors. If this paradigm shift is achieved, Brooks’ proposal for cognition without representation appears promising for full-blown intelligent agents—though not for conscious agents.
This is the editorial for a special volume of JETAI, featuring papers by Omohundro, Armstrong/Sotala/O’Heigeartaigh, T. Goertzel, Brundage, Yampolskiy, B. Goertzel, Potapov/Rodionov, Kornai and Sandberg. - If the general intelligence of artificial systems were to surpass that of humans significantly, this would constitute a significant risk for humanity – so even if we estimate the probability of this event to be fairly low, it is necessary to think about it now. We need to estimate what progress we can expect, what the impact of superintelligent machines might be, how we might design safe and controllable systems, and whether there are directions of research that should best be avoided or strengthened.
Cognition is commonly taken to be computational manipulation of representations. These representations are assumed to be digital, but it is not usually specified what that means and what relevance it has for the theory. I propose a specification for being a digital state in a digital system, especially a digital computational system. The specification shows that identification of digital states requires functional directedness, either for someone or for the system of which it is a part. In the case of digital representations, the state must be a token of a representational type, where the function of the type is to represent. [An earlier version of this paper was discussed in the web-conference "Interdisciplines" https://web.archive.org/web/20100221125700/http://www.interdisciplines.org/adaptation/papers/7 ].
Developments in artificial intelligence (AI) are exciting. But where is this journey heading? I present an analysis according to which exponential growth in computing speed and data were the decisive factors in progress so far. I then explain under which assumptions this growth will continue to enable progress: 1) intelligence is one-dimensional and measurable, 2) cognitive science is not needed for AI, 3) computation is sufficient for cognition, 4) current techniques and architectures are sufficiently scalable, 5) Technological Readiness Levels (TRL) prove to be feasible. These assumptions will turn out to be dubious.
[This is the short version of: Müller, Vincent C. and Bostrom, Nick (forthcoming 2016), ‘Future progress in artificial intelligence: A survey of expert opinion’, in Vincent C. Müller (ed.), Fundamental Issues of Artificial Intelligence (Synthese Library 377; Berlin: Springer).] - - - In some quarters, there is intense concern about high-level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity; in other quarters, these issues are ignored or considered science fiction. We wanted to clarify what the distribution of opinions actually is, what probability the best experts currently assign to high-level machine intelligence coming up within a particular time-frame, which risks they see with that development, and how fast they see these developing. We thus designed a brief questionnaire and distributed it to four groups of experts. Overall, the results show an agreement among experts that AI systems will probably reach overall human ability around 2040-2050 and move on to superintelligence in less than 30 years thereafter. The experts say the probability is about one in three that this development turns out to be ‘bad’ or ‘extremely bad’ for humanity.
Special Issue “Risks of artificial general intelligence”, Journal of Experimental and Theoretical Artificial Intelligence, 26/3 (2014), ed. Vincent C. Müller. http://www.tandfonline.com/toc/teta20/26/3#
- Risks of general artificial intelligence, Vincent C. Müller, pages 297-301
- Autonomous technology and the greater human good, Steve Omohundro, pages 303-315
- The errors, insights and lessons of famous AI predictions – and what they mean for the future, Stuart Armstrong, Kaj Sotala & Seán S. Ó hÉigeartaigh, pages 317-342
- The path to more general artificial intelligence, Ted Goertzel, pages 343-354
- Limitations and risks of machine ethics, Miles Brundage, pages 355-372
- Utility function security in artificially intelligent agents, Roman V. Yampolskiy, pages 373-389
- GOLEM: towards an AGI meta-architecture enabling both goal preservation and radical self-improvement, Ben Goertzel, pages 391-403
- Universal empathy and ethical bias for artificial general intelligence, Alexey Potapov & Sergey Rodionov, pages 405-416
- Bounding the impact of AGI, András Kornai, pages 417-438
- Ethics of brain emulations, Anders Sandberg, pages 439-457
Engineers fine-tune the design of robot bodies for control purposes; however, a methodology or set of tools is largely absent, and optimization of morphology (shape, material properties of robot bodies, etc.) is lagging behind the development of controllers. This has become even more prominent with the advent of compliant, deformable or “soft” bodies. These carry substantial potential regarding their exploitation for control—sometimes referred to as “morphological computation”. In this article, we briefly review different notions of computation by physical systems and propose the dynamical systems framework as the most useful in the context of describing and eventually designing the interactions of controllers and bodies. Then, we look at the pros and cons of simple vs. complex bodies, critically reviewing the attractive notion of “soft” bodies automatically taking over control tasks. We address another key dimension of the design space—whether model-based control should be used and to what extent it is feasible to develop faithful models for different morphologies.
The paper presents a paradoxical feature of computational systems that suggests that computationalism cannot explain symbol grounding. If the mind is a digital computer, as computationalism claims, then it can be computing either over meaningful symbols or over meaningless symbols. If it is computing over meaningful symbols, its functioning presupposes the existence of meaningful symbols in the system, i.e. it implies semantic nativism. If the mind is computing over meaningless symbols, no intentional cognitive processes are available prior to symbol grounding. In this case, no symbol grounding could take place, since any grounding presupposes intentional cognitive processes. So, whether computing in the mind is over meaningless or over meaningful symbols, computationalism implies semantic nativism.
The dialogue develops arguments for and against a broad new world system - info-computationalist naturalism - that is supposed to overcome the traditional mechanistic view. It would make the older mechanistic view into a special case of the new general info-computationalist framework (rather like Euclidean geometry remains valid inside a broader notion of geometry). We primarily discuss what the info-computational paradigm would mean, especially its pancomputationalist component. This includes the requirements for the new generalized notion of computing that would include sub-symbolic information processing. We investigate whether pancomputationalism can provide the basic causal structure to the world and whether the overall research program of info-computationalist naturalism appears productive, especially when it comes to new approaches to the living world, including computationalism in the philosophy of mind.
The declared goal of this paper is to fill this gap: “... cognitive systems research needs questions or challenges that define progress. The challenges are not (yet more) predictions of the future, but a guideline to what are the aims and what would constitute progress.” – the quotation being from the project description of EUCogII, the project for the European Network for Cognitive Systems within which this formulation of the ‘challenges’ was originally developed (http://www.eucognition.org). So we stick our necks out and formulate the challenges for artificial cognitive systems. These challenges are articulated in terms of a definition of what a cognitive system is: a system that learns from experience and uses its acquired knowledge (both declarative and practical) in a flexible manner to achieve its own goals.
This paper investigates the view that digital hypercomputing is a good reason for rejection or re-interpretation of the Church-Turing thesis. After suggesting that such re-interpretation is historically problematic and often involves attack on a straw man (the ‘maximality thesis’), it discusses proposals for digital hypercomputing with Zeno-machines, i.e. computing machines that compute an infinite number of computing steps in finite time, thus performing supertasks. It argues that effective computing with Zeno-machines falls into a dilemma: either they are specified such that they do not have output states, or they are specified such that they do have output states, but involve contradiction. Repairs through non-effective methods or special rules for semi-decidable problems are sought, but not found. The paper concludes that hypercomputing supertasks are impossible in the actual world and thus no reason for rejection of the Church-Turing thesis in its traditional interpretation.
I see four symbol grounding problems: 1) How can a purely computational mind acquire meaningful symbols? 2) How can we get a computational robot to show the right linguistic behavior? These two are misleading. I suggest an 'easy' and a 'hard' problem: 3) How can we explain and re-produce the behavioral ability and function of meaning in artificial computational agents? 4) How does physics give rise to meaning?
This book reports on the results of the third edition of the premier conference in the field of philosophy of artificial intelligence, PT-AI 2017, held on November 4-5, 2017 at the University of Leeds, UK. It covers: advanced knowledge on key AI concepts, including complexity, computation, creativity, embodiment, representation and superintelligence; cutting-edge ethical issues, such as the impact of AI on human dignity and society, responsibilities and rights of machines, as well as AI threats to humanity and AI safety; and cutting-edge developments in techniques to achieve AI, including machine learning, neural networks, and dynamical systems. The book also discusses important applications of AI, including big data analytics, expert systems, cognitive architectures, and robotics. It offers a timely, yet very comprehensive snapshot of what is going on in the field of AI, especially at the interfaces between philosophy, cognitive science, ethics and computing.
Floridi and Taddeo propose a condition of “zero semantic commitment” for solutions to the grounding problem, and a solution to it. I argue briefly that their condition cannot be fulfilled, not even by their own solution. After a look at Luc Steels' very different competing suggestion, I suggest that we need to re-think what the problem is and what role the ‘goals’ in a system play in formulating the problem. On the basis of a proper understanding of computing, I come to the conclusion that the only sensible grounding problem is how we can explain and re-produce the behavioral ability and function of meaning in artificial computational agents.
Invited papers from PT-AI 2011. - Vincent C. Müller: Introduction: Theory and Philosophy of Artificial Intelligence - Nick Bostrom: The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents - Hubert L. Dreyfus: A History of First Step Fallacies - Antoni Gomila, David Travieso and Lorena Lobo: Wherein is Human Cognition Systematic - J. Kevin O'Regan: How to Build a Robot that Is Conscious and Feels - Oron Shagrir: Computation, Implementation, Cognition.
There is much discussion about whether the human mind is a computer, whether the human brain could be emulated on a computer, and whether all physical entities are computers (pancomputationalism). These discussions, and others, require criteria for what is digital. I propose that a state is digital if and only if it is a token of a type that serves a particular function - typically a representational function for the system. This proposal is made on a syntactic level, assuming three levels of description (physical, syntactic, semantic). It suggests that being digital is a matter of discovery, or rather a matter of how we wish to describe the world, if a functional description can be assumed. Given the criterion provided and the necessary empirical research, we should be in a position to decide whether a given system (e.g. the human brain) is a digital system and can thus be reproduced in a different digital system (since digital systems allow multiple realization).
Papers from the conference on AI Risk (published in JETAI), supplemented by additional work. --- If the intelligence of artificial systems were to surpass that of humans, humanity would face significant risks. The time has come to consider these issues, and this consideration must include progress in artificial intelligence (AI) as much as insights from AI theory. -- Featuring contributions from leading experts and thinkers in artificial intelligence, Risks of Artificial Intelligence is the first volume of collected chapters dedicated to examining the risks of AI. The book evaluates predictions of the future of AI, proposes ways to ensure that AI systems will be beneficial to humans, and then critically evaluates such proposals.
1 Vincent C. Müller, Editorial: Risks of Artificial Intelligence
2 Steve Omohundro, Autonomous Technology and the Greater Human Good
3 Stuart Armstrong, Kaj Sotala and Sean O’Heigeartaigh, The Errors, Insights and Lessons of Famous AI Predictions - and What they Mean for the Future
4 Ted Goertzel, The Path to More General Artificial Intelligence
5 Miles Brundage, Limitations and Risks of Machine Ethics
6 Roman Yampolskiy, Utility Function Security in Artificially Intelligent Agents
7 Ben Goertzel, GOLEM: Toward an AGI Meta-Architecture Enabling Both Goal Preservation and Radical Self-Improvement
8 Alexey Potapov and Sergey Rodionov, Universal Empathy and Ethical Bias for Artificial General Intelligence
9 András Kornai, Bounding the Impact of AGI
10 Anders Sandberg, Ethics and Impact of Brain Emulations
11 Daniel Dewey, Long-Term Strategies for Ending Existential Risk from Fast Takeoff
12 Mark Bishop, The Singularity, or How I Learned to Stop Worrying and Love AI
The theory and philosophy of artificial intelligence has come to a crucial point where the agenda for the forthcoming years is in the air. This special volume of Minds and Machines presents leading invited papers from a conference on the “Philosophy and Theory of Artificial Intelligence” that was held in October 2011 in Thessaloniki. Artificial Intelligence is perhaps unique among engineering subjects in that it has raised very basic questions about the nature of computing, perception, reasoning, learning, language, action, interaction, consciousness, humankind, life etc. – and at the same time it has contributed substantially to answering these questions. There is thus a substantial tradition of work, both on AI by philosophers and of theory within AI itself. - The volume contains papers by Bostrom, Dreyfus, Gomila, O'Regan and Shagrir.
The paper discusses the extended mind thesis with a view to the notions of “agent” and of “mind”, while helping to clarify the relation between “embodiment” and the “extended mind”. I will suggest that the extended mind thesis constitutes a reductio ad absurdum of the notion of ‘mind’; the consequence of the extended mind debate should be to drop the notion of the mind altogether – rather than entering the discussion of how extended it is.
In this paper I want to propose an argument to support Jerry Fodor’s thesis (Fodor 1983) that input systems are modular and thus informationally encapsulated. The argument starts with the suggestion that there is a “grounding problem” in perception, i.e. that there is a problem in explaining how perception that can yield a visual experience is possible, how sensation can become meaningful perception of something for the subject. Given that visual experience is actually possible, this invites a transcendental argument that explains the conditions of its possibility. I propose that one of these conditions is the existence of a visual module in Fodor’s sense that allows the step from sensation to object-identifying perception, thus enabling visual experience. It seems to follow that there is informationally encapsulated nonconceptual content in visual perception.
"Data mining is not an invasion of privacy because access to data is only by machines, not by people": this is the argument that is investigated here. The current importance of this problem is developed in a case study of data mining in the USA for counterterrorism and other surveillance purposes. After a clarification of the relevant nature of privacy, it is argued that access by machines cannot warrant access to further information, since the analysis will have to be made either by humans or by machines that understand. The paper concludes that current data mining violates the right to privacy and should be subject to the standard legal constraints for access to private information by people.
[Müller, Vincent C. (ed.), (2013), Philosophy and theory of artificial intelligence (SAPERE, 5; Berlin: Springer). 429 pp.] --- Can we make machines that think and act like humans or other natural intelligent agents? The answer to this question depends on how we see ourselves and how we see the machines in question. Classical AI and cognitive science had claimed that cognition is computation, and can thus be reproduced on other computing machines, possibly surpassing the abilities of human intelligence. This consensus has now come under threat and the agenda for the philosophy and theory of AI must be set anew, re-defining the relation between AI and Cognitive Science. We can re-claim the original vision of general AI from the technical AI disciplines; we can reject classical cognitive science and replace it with a new theory (e.g. embodied); or we can try to find new ways to approach AI, for example from neuroscience or from systems theory. To do this, we must go back to the basic questions on computing, cognition and ethics for AI. The 30 papers in this volume provide cutting-edge work from leading researchers that define where we stand and where we should go from here.
Immanuel Kant famously defined philosophy to be about three questions: “What can I know? What should I do? What can I hope for?” (KrV, B833). I want to suggest that the three questions of our course on the philosophy of computing are: What is computing? What should we do with computing? What could computing do?
This volume brings together the advanced research results obtained by the European COST Action 2102: “Cross Modal Analysis of Verbal and Nonverbal Communication”. The research published in this book was discussed at the 3rd joint EUCogII-COST 2102 International Training School entitled “Toward Autonomous, Adaptive, and Context-Aware Multimodal Interfaces: Theoretical and Practical Issues”, held in Caserta, Italy, on March 15-19, 2010.
Einleitung 1
Kritik des Positivismus: Realismus «Was kann ich wissen?»
1 Erklärung und Referenz (1973) 1
2 Sprache und Wirklichkeit (1975) 38
3 Was ist ‹Realismus›? (1975) 77
Der dritte Weg: Interner Realismus statt metaphysischem Realismus oder Positivismus
4 Modelle und Wirklichkeit (1980) 112
5 Referenz und Wahrheit (1980) 159
6 Wie man zugleich interner Realist und transzendentaler Idealist sein kann (1980) 191
7 Warum es keine Fertigwelt gibt (1982) 218
Auf des Messers Schneide: Interner Realismus und Relativismus
8 Wozu die Philosophen? (1986) 259
9 Realismus mit menschlichem Antlitz (1988/90) 284
10 Irrealismus und Dekonstruktion (1992) 330
Bibliographie der Schriften von Hilary Putnam 363
I Bücher 363
II Aufsätze, Vorträge und Vorlesungen 365
III Übersetzungen ins Deutsche 385
Literaturverzeichnis 387
Register 413
Two large lexicological projects for the Center for the Greek Language, Thessaloniki, were to be published in print and on the WWW, which meant that two conversions were needed: a near-database file had to be converted to a fully formatted file for printing, and a fully formatted file had to be converted to a database for WWW access. As it turned out, both conversions could make use of existing clues that indicated the kinds of information contained in each particular piece of text, thus separating fields from each other and ordering them into a tree-like structure. This indicates that both forms of the dictionaries, print and database, stem from the same cognitive need to categorize information into kinds before further understanding – be this for a human reader or for a machine.
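The kind of clue-driven conversion described above can be sketched in a few lines. This is a minimal illustration, not the project's actual code: the entry format, the field markers, and the separator characters are all hypothetical, standing in for whatever clues the real formatted files contained.

```python
# Sketch of clue-driven field extraction: formatting markers in the text
# ("|" between fields, ":" between label and value, "sense N" labels)
# are used to separate fields and arrange them into a small tree.
ENTRY = "lemma: logos | pos: noun | sense 1: word; reason | sense 2: account"

def parse_entry(text):
    """Split an entry on its (hypothetical) field separators and build a record."""
    record = {"senses": []}
    for field in text.split("|"):
        key, _, value = field.partition(":")
        key, value = key.strip(), value.strip()
        if key.startswith("sense"):  # numbered senses become ordered children
            record["senses"].append(value)
        else:
            record[key] = value
    return record

record = parse_entry(ENTRY)
print(record["lemma"], len(record["senses"]))  # → logos 2
```

The same record can then feed either output: templated back into formatted text for print, or inserted field-by-field into database tables for the WWW version.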
The debates currently conducted in philosophy under the title 'realism' are, by general consent, in a state of utmost confusion, so it seems useful to bring some order to the theoretical options before taking sides for one view or another. The present work argues that a systematically connected core of these debates can be ordered with the help of the concept of reference. After an analysis of some classical positions, a framework is constructed within which the positions can be classified and the central problems fruitfully discussed. For this purpose it is necessary to name theoretical criteria for classifying the positions, oriented towards the problems argued here to be central. Very briefly put, the view defended here is that realism should be understood as a metaphysical thesis asserting an existence independent of human epistemic and semantic capacities; in the current debates this is typically the existence of a kind of things, not that of an individual object. In the current debate, however, this metaphysical thesis is supported or attacked with semantic arguments, and those semantic arguments in turn employ epistemic considerations – concerning the question of what can and cannot be known. On the view defended here, the current realism debates, with the central role of semantic arguments just sketched, begin with the critique of the traditional Fregean concept of reference by Kripke and the early Putnam around 1970 (ch. 2). From this critique, which is to be re-evaluated, and from the externalist realist semantics for kind terms that these authors developed out of it, the first criterion for a position in a realism debate can be derived and clarified (ch. 3): does one take the kind in question to be a natural kind, i.e. hold that it has its 'cohesion' by nature? If so, the position in question is a realist one (thanks to the close connection between realism and externalism, the criterion is both sufficient and necessary). If one does not take the kind to be a natural kind, one is an antirealist. Each option necessarily comes with a particular semantics for the expression referring to the kind. It also emerges that two variants of realism must be distinguished, here called classical and moderate realism. It is then argued (ch. 4.1) that the concept of truth should not really be the central point in the realism debates in question, as has often been claimed; rather, its urgency derives from underlying semantic questions, which must therefore play the central role in uncovering the further criteria. In analysing the criticism of the position of 'classical realism' developed in ch. 3, a second distinguishing criterion for positions in realism debates can be developed: conceptual relativity. After rejecting Putnam's views on this topic, two variants are proposed: strong and weak conceptual relativity (ch. 4.2). Applying this feature, it is argued, in some cases forces a combination of a realist conception of kinds as natural with conceptual relativity. This middle position between classical realism and antirealism is called 'moderate realism'. The concept of a natural kind used in the first criterion, and with it the options in the debates, is finally clarified further by means of a discussion of the phenomenon of vagueness (ch. 4.3). In conclusion, the three options are summarised and an attempt is made to indicate their fruitfulness for various realism debates.
If President Kennedy had not been shot, would he have bombed North Vietnam? God only knows. Or does He? Does He at least know what Kennedy would have done? ... The Jesuits claimed, among other things, that many human actions are free in the sense that those performing them are not logically or causally compelled to perform them. ('Free' will be used in this sense throughout the present essay.) How, then, does God retain control over human history? Not by causally determining human actions, as the Dominicans seem to have believed, but by bringing about circumstances in which He knows we will freely act in accordance with His plans.
In this review, we present some ethical imperatives observed in this pandemic from a data ethics perspective. Our exposition connects recurrent ethical problems in the discipline, such as privacy, surveillance, transparency, accountability, and trust, to broader societal concerns about equality, discrimination, and justice. We acknowledge the significant role of data ethics in developing technological, inclusive, and pluralist societies.
The European Association for Cognitive Systems is the association resulting from the EUCog network, which has been active since 2006. It has ca. 1000 members and is currently chaired by Vincent C. Müller. We ran our annual conference on December 8-9, 2016, kindly hosted by the Technical University of Vienna with Markus Vincze as local chair. The invited speakers were David Vernon and Paul F.M.J. Verschure. Out of the 49 submissions for the meeting, we accepted 18 as papers and 25 as posters (after double-blind reviewing). Papers are published here as “full papers” or “short papers”, while posters are published here as “short papers” or “abstracts”. Some of the papers presented at the conference will be published in a separate special volume on ‘Cognitive Robot Architectures’ in the journal Cognitive Systems Research. - RC, VCM, YS, MV.
What proper role should considerations of risk, particularly to research subjects, play when it comes to conducting research on human enhancement in the military context? We introduce the currently visible military enhancement techniques (1) and the standard discussion of risk for these (2), in particular what we refer to as the ‘Assumption’, which states that the demands for risk-avoidance are higher for enhancement than for therapy. We challenge the Assumption through the introduction of three categories of enhancements (3): therapeutic, preventive, and pure enhancements. This demands a revision of the Assumption (4), alongside which we propose some further general principles bearing on how to balance risks and benefits in the context of military enhancement research. We identify a particular type of therapeutic enhancements as providing a more responsible path to human trials of the relevant interventions than pure enhancement applications. Finally, we discuss some possible objections to our line of thought (5). While acknowledging their potential insights, we ultimately find them to be unpersuasive, at least provided that our proposal is understood as fully non-coercive towards the candidates for such therapeutic enhancement trials.
A concordance of the poetic works of Giorgos Seferis, presenting all the principal “words” of the texts in an alphabetical list, stating how often each word occurs and giving a precise location and a relevant piece of text for each occurrence. We found ca. 9,500 different Greek words in 39,000 different occurrences, so our concordance has some 50,000 lines of text. The technical procedure required four main steps: text entry and tagging, production of the concordance, correction of the contexts, and formatting for print.
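In outline, the core of the second step (producing the concordance from entered text) amounts to building a keyword index that records, for every word, its frequency and each occurrence's location with a piece of context. A minimal sketch, with hypothetical sample lines and a deliberately simplified notion of 'word' (whitespace-split, punctuation-stripped), not the project's actual tagging scheme:

```python
from collections import defaultdict

def build_concordance(lines):
    """Index every word of a text: word -> list of (line number,
    context line). The frequency of a word is the length of its list,
    and each entry keeps a precise location plus relevant text."""
    index = defaultdict(list)
    for lineno, line in enumerate(lines, start=1):
        for word in line.split():
            # Strip surrounding punctuation and normalise case.
            key = word.strip(".,;:!?«»\"'()").lower()
            if key:
                index[key].append((lineno, line.strip()))
    return index

# Hypothetical two-line sample, not drawn from the actual corpus.
sample = ["Στο περιγιάλι το κρυφό", "κι άσπρο σαν περιστέρι"]
conc = build_concordance(sample)
```

Sorting the index keys alphabetically then yields the printed list; the remaining steps (context correction, print formatting) operate on the generated lines.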
There are unusual challenges in ethics for robotics and autonomous systems (RAS). Perhaps the issue is best summarised as the need for “technically informed ethics”. The technology of RAS raises issues with an ethical dimension, perhaps uniquely so, owing to the possibility of moving human decision-making, which is implicitly ethically informed, to computer systems. Further, if we seek solutions to these problems – ethically aligned design, to use the IEEE’s terminology – then the solutions must be technically meaningful, capable of realisation, capable of assurance, and suitable as a basis for regulation. Thus, ethics for RAS is a rich, complex, multi-disciplinary concern, perhaps more complex than many other ethical issues facing society today. It is also fast-moving. This paper has endeavoured to give an accessible introduction to some of the key issues, noting that many of them are quite subtle and cannot be done full justice in such a short document. We have sought to counterbalance this by giving an extensive list of initiatives, standards, etc. that focus on the ethics of RAS and AI; see Annex A.
What is someone's homeland? Let us look at the issue for a moment as a problem in the philosophy of language: for a predicate 'English', 'French', 'Portuguese', 'Belgian', 'Flemish', how are we to decide which objects (which people) fall under which predicate? Do these predicates have any meaning in the end? The use and abuse of these predicates has been one of the main sources of misery over the last two centuries and remains on the agenda. It usually appears in the context of struggles to gain 'freedom' as opposed to 'oppression', both at a global level and closer to us: in Macedonia, Catalonia, Ireland, Kurdistan ... I think that quite a few of these are based on mistakes.
1 The principles - 2 The Turing test - 3 Classical artificial intelligence - 4 Artificial intelligence today - 5 The artificial intelligence of the future - With present-day technologies we will probably find it hard to get as far as building machines with artificial intelligence. In my view, we will see further technical solutions using classical artificial intelligence and the 'bottom-up' method, but I do not expect radical progress before we learn much more about our brain. There are very good reasons to say that our mind is not a computer, and I do not expect that we will be able to create cognition with a computer alone. But we might with other machines – why not?
Should we do speculative cognitive science? - In present-day philosophy, I see a fashion of using empirical facts (data) to support positions that are not philosophical but empirical in nature. The argumentative structure is that of classical philosophy, saying that ‘this has to be that way because …’, where the ‘this’ refers to some empirical state of affairs. This kind of philosophy speculates about empirical facts in areas where we do not yet know the facts – the arguments are a priori, supported by a posteriori data. This is precisely what the speculative philosophy of German Idealism was doing, e.g. in the works of Schelling or Hegel.
This book is a new attempt to clarify what is at issue in the contemporary realism debates and to suggest which form the controversies ought to take. Wright has contributed to these debates for quite some time, essentially on the anti-realist side (witness the papers collected in Realism, Meaning and Truth, 1987, 2nd ed. 1993, and the forthcoming Realism, Rules and Objectivity, both Oxford: Basil Blackwell). In Truth and Objectivity, however, he takes a step back and sketches a neutral ground upon which both sides could agree in order to define their oppositions clearly, thus enabling fruitful discussions. His methodological suggestion for a realism debate in a given assertoric discourse is that both sides should agree on a “minimal” concept of truth for that discourse and then see whether ascent to a more metaphysically substantial concept of truth is warranted, which would constitute a realism for the discourse in question. If Wright had managed to set the agenda in a way that does justice to both sides, this book would have constituted a major contribution to contemporary epistemology and metaphysics.
This volume offers very selected papers from the 2014 conference of the “International Association for Computing and Philosophy” (IACAP) - a conference tradition of 28 years. - - - Table of Contents - 0 Vincent C. Müller: - Editorial - 1) Philosophy of computing - 1 Çem Bozsahin: - What is a computational constraint? - 2 Joe Dewhurst: - Computing Mechanisms and Autopoietic Systems - 3 Vincenzo Fano, Pierluigi Graziani, Roberto Macrelli and Gino Tarozzi: - Are Gandy Machines really local? - 4 Doukas Kapantais: - A refutation of the Church-Turing thesis according to some interpretation of what the thesis says - 5 Paul Schweizer: - In What Sense Does the Brain Compute? - 2) Philosophy of computer science & discovery - 6 Mark Addis, Peter Sozou, Peter C R Lane and Fernand Gobet: - Computational Scientific Discovery and Cognitive Science Theories - 7 Nicola Angius and Petros Stefaneas: - Discovering Empirical Theories of Modular Software Systems. An Algebraic Approach. - 8 Selmer Bringsjord, John Licato, Daniel Arista, Naveen Sundar Govindarajulu and Paul Bello: - Introducing the Doxastically Centered Approach to Formalizing Relevance Bonds in Conditionals - 9 Orly Stettiner: - From Silico to Vitro: - Computational Models of Complex Biological Systems Reveal Real-world Emergent Phenomena - 3) Philosophy of cognition & intelligence - 10 Douglas Campbell: - Why We Shouldn’t Reason Classically, and the Implications for Artificial Intelligence - 11 Stefano Franchi: - Cognition as Higher Order Regulation - 12 Marcello Guarini: - Eliminativisms, Languages of Thought, & the Philosophy of Computational Cognitive Modeling - 13 Marcin Miłkowski: - A Mechanistic Account of Computational Explanation in Cognitive Science and Computational Neuroscience - 14 Alex Tillas: - Internal supervision & clustering: - A new lesson from ‘old’ findings?
- 4) Computing & society - 15 Vasileios Galanos: - Floridi/Flusser: - Parallel Lives in Hyper/Posthistory - 16 Paul Bello: - Machine Ethics and Modal Psychology - 17 Marty J. Wolf and Nir Fresco: - My Liver Is Broken, Can You Print Me a New One? - 18 Marty J. Wolf, Frances Grodzinsky and Keith W. Miller: - Robots, Ethics and Software – FOSS vs. Proprietary Licenses.
We try to show that there is no difference in principle between communicating a piece of information to a human and to a machine. The argument depends on the following theses: communicating is transfer of information; information has propositional form; propositional form can be modelled as categorisation; categorisation can be modelled in a machine; a suitably equipped machine can grasp propositional content designed for human communication. I suggest that the discussion should focus on the truth and precise meaning of these statements. If these statements are true, however, it follows that for any act of communication that successfully transfers a piece of information to a human, that act could also transfer that piece of information to a machine.