Edited by Eric Dietrich (State University of New York at Binghamton)
About this topic
Summary
The content of this category can be found in the categories "Can machines think?" and "Machine consciousness."
Key works
The content of this category can be found in the categories "Can machines think?" and "Machine consciousness." See those two categories for readings and references as well.
Introductions
See the categories "Can machines think?" and "Machine consciousness."
In this paper, I focus on AIs as very different, or at least potentially very different, kinds of language users from what humans are. Is the metasemantics for AI language use different, in the way Cappelen and Dever argue? Is it reasonable to think that AIs will come to use languages importantly different from human languages, what I call alien languages?
In light of the recent breakneck pace of progress in machine learning, questions about whether near-future artificial systems might be conscious and possess moral status are increasingly pressing. This paper argues that, as matters stand, these debates lack any clear criteria for resolution via the science of consciousness. Instead, insofar as they are settled at all, it is likely to be via shifts in public attitudes brought about by the increasingly close relationships between humans and AI systems. Section 1 of the paper briefly lays out the current state of the science of consciousness and its limitations insofar as these pertain to machine consciousness, and claims that there are no obvious consensus frameworks to inform public opinion on AI consciousness. Section 2 examines the rise of conversational chatbots, or Social AI, and argues that in many cases these elicit strong and sincere attributions of consciousness, mentality, and moral status from users, a trend likely to become more widespread. Section 3 presents an inconsistent triad for theories that attempt to link consciousness, behaviour, and moral status, noting that trends in Social AI systems will likely make the inconsistency of these three premises more pressing. Finally, Section 4 presents some limited suggestions for how the consciousness and AI research communities should respond to the gap between expert opinion and folk judgment.
The present paper looks at one of the most thorough articles on the intelligence of GPT, research conducted by engineers at Microsoft. Although there is a great deal of value in their work, I will argue that, for familiar philosophical reasons, their methodology, ‘Black-box Interpretability’, is wrongheaded. But there is a better way. There is an exciting and emerging discipline of ‘Inner Interpretability’ (also sometimes called ‘White-box Interpretability’) that aims to uncover the internal activations and weights of models in order to understand what they represent and the algorithms they implement. In my view, a crucial mistake in Black-box Interpretability is the failure to appreciate that how processes are carried out matters when it comes to intelligence and understanding. I can’t pretend to have a full story that provides both necessary and sufficient conditions for being intelligent, but I do think that Inner Interpretability dovetails nicely with plausible philosophical views of what intelligence requires. So the conclusion is modest, but the important point, in my view, is seeing how to get the research on the right track. Towards the end of the paper, I show how some of the philosophical concepts can be used to further refine how Inner Interpretability is approached; the paper thus helps draw out a profitable future two-way exchange between philosophers and computer scientists.
David Chalmers has recently developed a novel strategy of refuting external world skepticism, one he dubs the structuralist solution. In this paper, I make three primary claims: First, structuralism does not vindicate knowledge of other minds, even if it is combined with a functionalist approach to the metaphysics of minds. Second, because structuralism does not vindicate knowledge of other minds, the structuralist solution vindicates far less worldly knowledge than we would hope for from a solution to skepticism. Third, these results suggest that the problem of external world skepticism should perhaps be construed as two different problems, since the problem might turn out to require two substantively different solutions, one for knowledge of the kind that is not dependent on other minds and one for knowledge that is.
The view that phenomenally conscious robots are on the horizon often rests on a certain philosophical view about consciousness, one we call “nomological behaviorism.” The view entails that, as a matter of nomological necessity, if a robot had exactly the same patterns of dispositions to peripheral behavior as a phenomenally conscious being, then the robot would be phenomenally conscious; indeed it would have all and only the states of phenomenal consciousness that the phenomenally conscious being in question has. We experimentally investigate whether the folk think that certain (hypothetical) robots made of silicon and steel would have the same conscious states as certain familiar biological beings with the same patterns of dispositions to peripheral behavior as the robots. Our findings provide evidence that the folk largely reject the view that silicon-based robots would have the sensations that they, the folk, attribute to the biological beings in question.
Although even very advanced artificial systems do not meet the demanding conditions required for humans to count as proper participants in a social interaction, we argue that not all human-machine interactions (HMIs) can appropriately be reduced to mere tool-use. By criticizing the far too demanding conditions of standard construals of intentional agency, we suggest a minimal approach that ascribes minimal agency to some artificial systems, resulting in the proposal to take minimal joint actions as a case of a social HMI. Analyzing such HMIs, we utilize Dennett’s stance epistemology and argue that taking either an intentional stance or a design stance can be misleading for several reasons, and instead propose to introduce a new stance that is able to capture social HMIs: the AI-stance.
As AI systems become increasingly competent language users, it is an apt moment to consider what it would take for machines to understand human languages. This paper considers whether either language models such as GPT-3 or chatbots might be able to understand language, focusing on the question of whether they could possess the relevant concepts. A significant obstacle is that systems of both kinds interact with the world only through text, and thus seem ill-suited to understanding utterances concerning the concrete objects and properties which human language often describes. Language models cannot understand human languages because they perform only linguistic tasks, and therefore cannot represent such objects and properties. However, chatbots may perform tasks concerning the non-linguistic world, so they are better candidates for understanding. Chatbots can also possess the concepts necessary to understand human languages, despite their lack of perceptual contact with the world, due to the language-mediated concept-sharing described by social externalism about mental content.
As we await the increasingly likely advent of genuinely intelligent artificial systems, a fair amount of consideration has been given to how we humans will interact with them. Less consideration has been given to how—indeed if—we humans will love them. What would human-AI romantic relationships look like? What do such relationships tell us about the nature of love? This chapter explores these questions via consideration of several works of science fiction, focusing especially on the Black Mirror episode “Be Right Back” and Spike Jonze's movie *Her*. As I suggest, there may well be cases where it is both possible and appropriate for a human to fall in love with a machine.
The open-domain Frame Problem is the problem of determining what features of an open task environment need to be updated following an action. Here we prove that the open-domain Frame Problem is equivalent to the Halting Problem and is therefore undecidable. We discuss two other open-domain problems closely related to the Frame Problem, the system identification problem and the symbol-grounding problem, and show that they are similarly undecidable. We then reformulate the Frame Problem as a quantum decision problem, and show that it is undecidable by any finite quantum computer.
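A minimal sketch of one direction of the claimed equivalence may help fix ideas (an illustration of the proof's general shape, not the authors' actual construction): a decider for the open-domain Frame Problem would yield a decider for halting.

```latex
% Sketch only: reduce HALT to the open-domain Frame Problem (FP).
% Given a Turing machine $M$ and input $w$, build an open task
% environment $E_{M,w}$ containing a feature $f$ that an action $a$
% changes just in case $M$ halts on $w$:
\[
  f \text{ needs updating after } a \text{ in } E_{M,w}
  \iff M \text{ halts on } w .
\]
% Any total decision procedure $\mathrm{FP}(E, a, f)$ for "does
% feature $f$ need updating after action $a$ in environment $E$?"
% would then compute $\mathrm{HALT}(M, w) = \mathrm{FP}(E_{M,w}, a, f)$,
% contradicting the undecidability of the Halting Problem.
```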
This paper investigates the concept of behavioral autonomy in Artificial Life by drawing a parallel to the use of teleological notions in the study of biological life. Contrary to one of the leading assumptions in Artificial Life research, I argue that there is a significant difference in how autonomous behavior is understood in artificial and biological life forms: the former is underlain by human goals in a way that the latter is not. While behavioral traits can be explained in relation to evolutionary history in biological organisms, in synthetic life forms behavior depends on a design driven by a research agenda, further shaped by broader human goals. This point will be illustrated with a case study on a synthetic life form. Consequently, the putative epistemic benefit of reaching a better understanding of behavioral autonomy in biological organisms by synthesizing artificial life forms is subject to doubt: the autonomy observed in such artificial organisms may be a mere projection of human agency. Further questions arise in relation to the need to spell out the relevant human aims when addressing potential social or ethical implications of synthesizing artificial life forms.
The purpose of the article is to identify the religious factor in the teaching of transhumanism, to determine its role in the ideology of this current of thought, and to identify the possible limits of technological interference in human nature. Theoretical basis. The methodological basis of the article is the idea of transhumanism. Originality. In the foreseeable future, robots will be able to pass the Turing test, become “electronic personalities” and gain political rights, although the question of the possibility of machine consciousness and self-awareness remains open. In robots, people are creating assistants to whom, on the initial terms, they will almost certainly lose any evolutionary competition. To compete successfully with robots, people will have to change, ceasing to be people in the classical sense. Changing the nature of man will require the emergence of a new, posthuman, anthropology. Conclusions. Against the background of the scientific discoveries, technical breakthroughs, and everyday improvements of recent decades, an anthropological revolution has taken shape, one that makes it possible to set the task of creating inhumanly intelligent creatures and of changing human nature, up to and including discussing options for artificial immortality. The history of man ends and the history of the posthuman begins. We can no longer turn off this path; it is, however, in our power to preserve our human qualities in the posthuman future. The theme of the soul has reasserted itself, but from a different perspective, as the theme of consciousness and self-awareness; it has become relevant again in connection with the development of computer and cloud technologies, artificial intelligence technologies, and the like. If a machine ever becomes a "man", can a man then become a "machine"? Even if such a hypothetical possibility were to become reality, however, we could not speak of any form of individual immortality or of the continuation of existence in a different physical form. A digital copy of the soul would still remain a copy, and I see no fundamental possibility of isolating a substrate-independent mind from the human body. Immortality itself is needed not so much to quiet anyone’s fears or encourage anyone’s hopes as to finally settle a religious question. However, the gods hold the keys to heaven tightly and are unlikely to admit our modified descendants there.
Just as humans can draw conclusions responsibly or irresponsibly, so too can computers. Machine learning systems that have been trained on data sets that include irresponsible judgments are likely to yield irresponsible predictions as outputs. In this paper I focus on a particular kind of inference a computer system might make: identification of the intentions with which a person acted on the basis of photographic evidence. Such inferences are liable to be morally objectionable, because of a way in which they are presumptuous. After elaborating this moral concern, I explore the possibility that carefully procuring the training data for image recognition systems could ensure that the systems avoid the problem. The lesson of this paper extends beyond just the particular case of image recognition systems and the challenge of responsibly identifying a person’s intentions. Reflection on this particular case demonstrates the importance (as well as the difficulty) of evaluating machine learning systems and their training data from the standpoint of moral considerations that are not encompassed by ordinary assessments of predictive accuracy.
Latest Sermon from the Church of Fundamentalist Naturalism by Pastor Hofstadter. Like his much more famous (or infamous for its relentless philosophical errors) work Gödel, Escher, Bach, it has a superficial plausibility, but if one understands that this is rampant scientism which mixes real scientific issues with philosophical ones (i.e., the only real issues are what language games we ought to play), then almost all its interest disappears. I provide a framework for analysis based in evolutionary psychology and the work of Wittgenstein (since updated in my more recent writings). Those wishing a comprehensive up-to-date framework for human behavior from the modern two systems view may consult my book ‘The Logical Structure of Philosophy, Psychology, Mind and Language in Ludwig Wittgenstein and John Searle’, 2nd ed. (2019). Those interested in more of my writings may see ‘Talking Monkeys--Philosophy, Psychology, Science, Religion and Politics on a Doomed Planet--Articles and Reviews 2006-2019’, 3rd ed. (2019), ‘The Logical Structure of Human Behavior’ (2019), and ‘Suicidal Utopian Delusions in the 21st Century’, 4th ed. (2019).
There is a horizon ahead. That horizon is far from being the one described here in its form, but perhaps it is in its essence. What I mean is that the positronic brains of the title may well never exist beyond the brilliant minds that conceived them in science fiction, but this does not mean there will not be systems analogous to them in their functions, above all with respect to rationality. ‘A Crítica da Razão Positrônica’ (‘The Critique of Positronic Reason’) is a text that attempts, starting from the interests established by Immanuel Kant in the Critique of Pure Reason, to determine the kind of rationality to be expected of androids, in this case androids possessing a positronic brain. I thus repeat what Kant wrote: ”All the interests of my reason (speculative as well as practical) are united in the following three questions: 1. What can I know? 2. What ought I to do? 3. What may I hope?” It is therefore through these questions that we will seek a positronic rationality, at times perhaps adopting the logical standpoint of the positronic androids themselves. [KANT, I., Critique of Pure Reason, A805/B833.]
Computers can mimic human intelligence, sometimes quite impressively. This has led some to claim that (a) computers can actually acquire intelligence, and/or (b) the human mind may be thought of as a very sophisticated computer. In this paper I argue that neither of these inferences is sound. The human mind and computers, I argue, operate on radically different principles.
Humans are becoming increasingly dependent on the ‘say-so' of machines, such as computers, smartphones, and robots. In epistemology, knowledge based on what you have been told is called ‘testimony', and being able to give and receive testimony is a prerequisite for engaging in many social roles. Should robots and other autonomous intelligent machines be considered epistemic testifiers akin to humans? This chapter attempts to answer this question as well as explore the implications of robot testimony for the criminal justice system. Few are in agreement as to the ‘types' of agents that can provide testimony. The chapter surveys three well-known approaches and shows that on two of these approaches being able to provide testimony is bound up with the possession of intentional mental states. Through a discussion of computational and folk-psychological approaches to intentionality, it is argued that a good case can be made for robots fulfilling all three definitions.
We study whether robots can satisfy the conditions for agents fit to be held responsible in a normative sense, with a focus on autonomy and self-control. An analogy between robots and human groups enables us to modify arguments concerning collective responsibility for studying questions of robot responsibility. On the basis of Alfred R. Mele’s history-sensitive account of autonomy and responsibility it can be argued that even if robots were to have all the capacities usually required of moral agency, their history as products of engineering would undermine their autonomy and thus responsibility.
With their “bottom-up” approach, Holk Cruse and Malte Schilling present a highly intriguing perspective on those mental phenomena that have fascinated humankind since ancient times. Among them are those aspects of our inner lives that are at the same time most salient and yet most elusive: we are conscious beings with complex emotions, thinking and acting in pursuit of various goals. Starting from abilities that are, from a biological point of view, very basic, such as the ability to move and navigate in an unpredictable environment, Cruse & Schilling have developed, step by step, a robotic system with the ability to plan future actions and, to a limited extent, to verbally report on its own internal states. The authors then offer a compelling argument that their system exhibits aspects of various higher-level mental phenomena such as emotion, attention, intention, volition, and even consciousness. The scientific investigation of the mind is faced with intricate problems at a very fundamental, methodological level. Not only is there a good deal of conceptual vagueness and uncertainty as to what the explananda precisely are, but it is also unclear what the best strategy might be for addressing the phenomena of interest. Cruse & Schilling’s bio-robotic “bottom-up” approach is designed to provide answers to such questions. In this commentary, I begin, in the first section, by presenting the main ideas behind this approach as I understand them. In the second section, I turn to an examination of its scope and limits. Specifically, I will suggest a set of constraints on good explanations based on the bottom-up approach. What criteria do such explanations have to meet in order to be of real scientific value? I maintain that there are essentially three such criteria: biological plausibility, adequate matching criteria, and transparency. Finally, in the third section, I offer directions for future research, as Cruse & Schilling’s bottom-up approach is well suited to provide new insights in the domain of social cognition and to explain its relation to phenomena such as language, emotion, and self.
Much has been written about the possibility of human trust in robots. In this article we consider a more specific relationship: that of a human follower’s obedience to a social robot who leads through the exercise of referent power and what Weber described as ‘charismatic authority.’ By studying robotic design efforts and literary depictions of robots, we suggest that human beings are striving to create charismatic robot leaders that will either (1) inspire us through their display of superior morality; (2) enthrall us through their possession of superhuman knowledge; or (3) seduce us with their romantic allure. Rejecting a contractarian-individualist approach which presumes that human beings will be able to consciously ‘choose’ particular robot leaders, we build on the phenomenological-social approach to trust in robots to argue that charismatic robot leaders will emerge naturally from our world’s social fabric, without any rational decision on our part. Finally, we argue that the stability of these leader-follower relations will hinge on a fundamental, unresolved question of robotic intelligence: is it possible for synthetic intelligences to exist that are morally, intellectually, and emotionally sophisticated enough to exercise charismatic authority over human beings—but not so sophisticated that they lose the desire to do so?
Functionalism about robot pain claims that what is definitive of robot pain is functional role, defined as the causal relations pain has to noxious stimuli, behavior, and other subjective states. Here, I propose that the only way to theorize role-functionalism about robot pain is in terms of type-identity theory. I argue that what makes a state pain for a neuro-robot at a time is the functional role it has in the robot at the time, and this state is type-identical to a specific circuit state. Support from an experimental study shows that if the neural network that controls a robot includes a specific 'emotion circuit', physical damage to the robot will cause the disposition to avoid movement, thereby enhancing fitness, compared to robots without the circuit. Thus, pain for a robot at a time is type-identical to a specific circuit state.
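A toy sketch of the kind of controller the cited experiment describes (class and signal names here are hypothetical illustrations, not the study's code): an 'emotion circuit' turns damage into a persistent state whose functional role is to dispose the robot against movement.

```python
# Illustrative sketch only: a controller whose 'emotion circuit' maps
# physical damage to a movement-avoidance disposition, as described above.

class NeuroRobotController:
    def __init__(self, has_emotion_circuit: bool):
        self.has_emotion_circuit = has_emotion_circuit
        self.pain_activation = 0.0   # the candidate 'circuit state'

    def sense(self, damage_signal: float) -> None:
        # The emotion circuit turns noxious input into a persistent state.
        if self.has_emotion_circuit:
            self.pain_activation = max(self.pain_activation, damage_signal)

    def act(self) -> str:
        # Functional role of pain: damage disposes the robot to avoid
        # movement, protecting the damaged part and enhancing fitness.
        if self.pain_activation > 0.5:
            return "rest"
        return "explore"

robot = NeuroRobotController(has_emotion_circuit=True)
robot.sense(damage_signal=0.9)
assert robot.act() == "rest"   # with the circuit: avoidance after damage

lesioned = NeuroRobotController(has_emotion_circuit=False)
lesioned.sense(damage_signal=0.9)
assert lesioned.act() == "explore"  # without it: no avoidance disposition
```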
The term “contemplative sciences” refers to an interdisciplinary approach to the mind that aims at a better understanding of alternative states of consciousness, like those obtained through deep concentration and meditation, mindfulness, and other “superior” or “spiritual” mental states. There is, however, a key discipline missing: artificial intelligence. AI has forgotten its original aim of creating intelligent machines that could help us better understand what intelligence is, and has become more concerned with pragmatic applications, so almost nobody in the field seems interested in joining this new effort of contemplative science. In this paper, I would like to accomplish the following: (1) to give a brief description of the field of “contemplative sciences”; (2) to argue why AI should actively join this new paradigm in the study of the mind; and (3) to set up a research program on artificial wisdom, that is, to design computational systems that can model at least some relevant aspects of human wisdom.
This paper tells the story of a recent laboratory medicine controversy in the Canadian province of Newfoundland and Labrador. During the controversy, a DAKO Autostainer machine was blamed for inaccurate breast cancer test results that led to the suboptimal treatment of many patients. In truth, the machine was not at fault. Using concepts developed by Bruno Latour and Pierre Bourdieu, we document the changing nature of the DAKO machine’s agency before, during, and after the controversy, and we make the ethical argument that treating the machine as a scapegoat was harmful to patients. The mistreatment of patients was directly tied to a misrepresentation of the DAKO machine. The way to avoid both forms of mistreatment would have been to include all humans and nonhumans affected by the controversy in the network of decision-making.
This paper discusses the relation between intelligence and motivation in artificial agents, developing and briefly arguing for two theses. The first, the orthogonality thesis, holds (with some caveats) that intelligence and final goals (purposes) are orthogonal axes along which possible artificial intellects can freely vary—more or less any level of intelligence could be combined with more or less any final goal. The second, the instrumental convergence thesis, holds that as long as they possess a sufficient level of intelligence, agents having any of a wide range of final goals will pursue similar intermediary goals because they have instrumental reasons to do so. In combination, the two theses help us understand the possible range of behavior of superintelligent agents, and they point to some potential dangers in building such an agent.
In this reply to James H. Fetzer’s “Minds and Machines: Limits to Simulations of Thought and Action”, I argue that computationalism should not be the view that (human) cognition is computation, but that it should be the view that cognition (simpliciter) is computable. It follows that computationalism can be true even if (human) cognition is not the result of computations in the brain. I also argue that, if semiotic systems are systems that interpret signs, then both humans and computers are semiotic systems. Finally, I suggest that minds can be considered as virtual machines implemented in certain semiotic systems, primarily the brain, but also AI computers. In doing so, I take issue with Fetzer’s arguments to the contrary.
Detractors of Searle’s Chinese Room Argument have arrived at a virtual consensus that the mental properties of the Man performing the computations stipulated by the argument are irrelevant to whether computational cognitive science is true. This paper challenges this virtual consensus to argue for the first of the two main theses of the persons reply, namely, that the mental properties of the Man are what matter. It does this by challenging many of the arguments and conceptions put forth by the systems and logical replies to the Chinese Room, either reducing them to absurdity or showing how they lead, on the contrary, to conclusions the persons reply endorses. The paper bases its position on the Chinese Room Argument on additional philosophical considerations, the foundations of the theory of computation, and theoretical and experimental psychology. The paper purports to show how all these dimensions tend to support the proposed thesis of the persons reply.
This paper is a follow-up of the first part of the persons reply to the Chinese Room Argument. The first part claims that the mental properties of the person appearing in that argument are what matter to whether computational cognitive science is true. This paper tries to discern what those mental properties are by applying a series of hypothetical psychological and strengthened Turing tests to the person, and argues that the results support the thesis that the Man performing the computations characteristic of understanding Chinese actually understands Chinese. The supposition that the Man does not understand Chinese has gone virtually unquestioned in this foundational debate. The persons reply acknowledges the intuitive power behind that supposition, but knows that brute intuitions are not epistemically sacrosanct. Like many intuitions humans have had, and later deposed, this intuition does not withstand experimental scrutiny. The second part of the persons reply consequently holds that computational cognitive science is confirmed by the Chinese Room thought experiment.
Scientific study of dreams requires the most objective methods to reliably analyze dream content. In this context, artificial intelligence should prove useful for an automatic and non-subjective scoring technique. Past research has utilized word-search and emotional-affiliation methods to model and automatically match human judges’ scoring of dream reports’ negative emotional tone. The current study added word associations to improve the model’s accuracy. Word associations were established using words’ frequency of co-occurrence with their defining words as found in a dictionary and an encyclopedia. It was hypothesized that this addition would facilitate the machine learning model and improve its predictability beyond those of previous models. With a sample of 458 dreams, this model demonstrated an improvement in accuracy from 59% to 63% on the negative emotional tone scale, and for the first time reached an accuracy of 77% on the positive scale.
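A minimal sketch of the co-occurrence idea described above (toy counts and hypothetical word lists, not the study's actual lexicon or model): seed emotion scores are propagated to other words in proportion to their co-occurrence with those seeds, and a report is scored by averaging over its words.

```python
# Sketch only: word-association scoring of a dream report's negative tone.

NEGATIVE_SEEDS = {"fear": 1.0, "death": 1.0, "falling": 0.8}

# Toy co-occurrence counts between report words and seed words, standing
# in for counts gathered from a dictionary and an encyclopedia.
COOCCURRENCE = {
    "dark":  {"fear": 12, "death": 3},
    "cliff": {"falling": 9, "fear": 4},
    "sun":   {},
}

def association_score(word: str) -> float:
    counts = COOCCURRENCE.get(word, {})
    total = sum(counts.values())
    if total == 0:
        # No associations recorded: fall back to the seed score, if any.
        return NEGATIVE_SEEDS.get(word, 0.0)
    # Weight each seed's score by its share of the word's co-occurrences.
    return sum(NEGATIVE_SEEDS[s] * c / total for s, c in counts.items())

def score_report(report: str) -> float:
    words = report.lower().split()
    return sum(association_score(w) for w in words) / max(len(words), 1)

print(score_report("dark cliff ahead"))  # higher: negatively toned
print(score_report("sun and warmth"))    # lower: not negatively toned
```

In the study itself, features like these fed a trained model matched against human judges' ratings; the sketch shows only how co-occurrence can turn a small seed lexicon into a broader scoring function.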
The essays in this volume represent the first steps by philosophers and artificial intelligence researchers toward explaining why it is necessary to add an ...
It is possible to survey humankind and be proud, even to smile, for we accomplish great things. Art and science are two notable worthy human accomplishments. Consonant with art and science are some of the ways we treat each other. Sacrifice and heroism are two admirable human qualities that pervade human interaction. But, as everyone knows, all this goodness is more than balanced by human depravity. Moral corruption infests our being. Why?
This paper describes a study of the effects of two acts of social intelligence, namely mimicry and social praise, when used by an artificial social agent. An experiment (N = 50) is described which shows that social praise (positive feedback about the ongoing conversation) increases the perceived friendliness of a chat-robot. Mimicry (displaying matching behavior) enhances the perceived intelligence of the robot. We advise designers to incorporate both mimicry and social praise when their system needs to function as a social actor. Different ways of implementing mimicry and praise by artificial social actors in an ambient persuasive scenario are discussed.
This paper addresses the problem of human–computer interactions when the computer can interpret and express a kind of human-like behavior, offering natural communication. A conceptual framework for incorporating emotions with rationality is proposed. A model of affective social interactions is described. The model utilizes the SAIBA framework, which distinguishes among several stages of processing of information. The SAIBA framework is extended, and a model is realized in human behavior detection, human behavior interpretation, intention planning, attention tracking, behavior planning, and behavior realization components. Two models of incorporating emotions with rationality into a virtual artifact are presented. The first uses an implicit implementation of emotions. The second has an explicit realization of a three-layered model of emotions, which is highly interconnected with other components of the system. Details of the model with implicit implementation of emotional behavior are shown, as well as the evaluation methodology and results. A discussion of the extended model of an agent is given in the final part of the paper.
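The staged pipeline named above can be pictured as a chain of small components (a hedged sketch; the function names and the toy affect logic are illustrative assumptions, not the paper's API or the SAIBA specification):

```python
# Sketch of a SAIBA-style pipeline: detected user behavior flows through
# interpretation and intention planning to realized agent behavior.

def detect_behavior(raw_input: str) -> dict:
    # Human behavior detection: turn raw input into observed cues.
    return {"text": raw_input, "smiling": "thanks" in raw_input.lower()}

def interpret_behavior(cues: dict) -> dict:
    # Human behavior interpretation: estimate the user's affective state.
    return {"valence": 1.0 if cues["smiling"] else 0.0}

def plan_intention(state: dict) -> str:
    # Intention planning: choose a communicative goal given the state.
    return "reciprocate_warmth" if state["valence"] > 0.5 else "clarify"

def plan_behavior(intention: str) -> list:
    # Behavior planning: expand the intention into concrete behaviors.
    if intention == "reciprocate_warmth":
        return ["smile", "nod"]
    return ["ask_question"]

def realize_behavior(behaviors: list) -> str:
    # Behavior realization: render the planned behaviors on the agent.
    return " + ".join(behaviors)

print(realize_behavior(plan_behavior(plan_intention(
    interpret_behavior(detect_behavior("Thanks, that helps!"))))))
```

In the paper's explicit variant, an emotion model would sit alongside these stages and feed into each of them; the sketch shows only the staged structure itself.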
Nicholas Agar has recently argued that it would be irrational for future human beings to choose to radically enhance themselves by uploading their minds onto computers. Utilizing Searle’s argument that machines cannot think, he claims that uploading might entail death. He grants that Searle’s argument is controversial, but he claims, so long as there is a non-zero probability that uploading entails death, uploading is irrational. I argue that Agar’s argument, like Pascal’s wager on which it is modelled, fails, because the principle that we (or future agents) ought to avoid actions that might entail death is not action guiding. Too many actions fall under its scope for the principle to be plausible. I also argue that the probability that uploading entails death is likely to be lower than Agar recognizes.
The issue of adequacy of the Turing Test (TT) is addressed. The concept of Turing Interrogative Game (TIG) is introduced. We show that if some conditions hold, then each machine, even a thinking one, loses a certain TIG and thus an instance of TT. If, however, the conditions do not hold, the success of a machine need not constitute a convincing argument for the claim that the machine thinks.
In 1949, the Department of Philosophy at the University of Manchester organized a symposium, “Mind and Machine”, with Michael Polanyi, the mathematicians Alan Turing and Max Newman, the neurologists Geoffrey Jefferson and J. Z. Young, and others as participants. This event is known among Turing scholars because it laid the seed for Turing’s famous paper “Computing Machinery and Intelligence”, but it is scarcely documented. Here, the transcript of this event, together with Polanyi’s original statement and his notes taken at a lecture by Jefferson, are edited and commented upon for the first time. The originals are in the Regenstein Library of the University of Chicago. The introduction highlights elements of the debate, which included neurophysiology, mathematics, the mind-body-machine problem, and consciousness, and shows that Turing’s approach, as documented here, does not lend itself to reductionism.
In the course of seeking an answer to the question "How do you know you are not a zombie?" Floridi (2005) issues an ingenious, philosophically rich challenge to artificial intelligence (AI) in the form of an extremely demanding version of the so-called knowledge game (or "wise-man puzzle," or "muddy-children puzzle")—one that purportedly ensures that those who pass it are self-conscious. In this article, on behalf of (at least the logic-based variety of) AI, I take up the challenge—which is to say, I try to show that this challenge can in fact be met by AI in the foreseeable future.
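For orientation, here is a small simulation of the ordinary muddy-children puzzle the abstract mentions (an illustration of the standard knowledge game only; Floridi's self-consciousness-detecting variant is far more demanding, and this is not Bringsjord's construction): with k muddy children, each muddy child works out its own state in round k.

```python
# Sketch only: the classic muddy-children knowledge game. After the
# public announcement that at least one child is muddy, a child who
# sees k-1 muddy faces and observes silence through round k-1 can
# infer in round k that it is muddy itself.

def muddy_children(muddy_flags):
    n = len(muddy_flags)
    knows = [False] * n
    round_no = 0
    while not all(knows[i] for i in range(n) if muddy_flags[i]):
        round_no += 1
        for i in range(n):
            if not muddy_flags[i] or knows[i]:
                continue
            others_seen = sum(muddy_flags[j] for j in range(n) if j != i)
            if others_seen == round_no - 1:
                knows[i] = True  # silence so far reveals my own state
    return round_no

print(muddy_children([True, False, False]))  # 1 muddy -> known in round 1
print(muddy_children([True, True, False]))   # 2 muddy -> known in round 2
```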
What happens when machines become more intelligent than humans? One view is that this event will be followed by an explosion to ever-greater levels of intelligence, as each generation of machines creates more intelligent machines in turn. This intelligence explosion is now often known as the “singularity”. The basic argument here was set out by the statistician I.J. Good in his 1965 article “Speculations Concerning the First Ultraintelligent Machine”: Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion”, and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make. The key idea is that a machine that is more intelligent than humans will be better than humans at designing machines. So it will be capable of designing a machine more intelligent than the most intelligent machine that humans can design. So if it is itself designed by humans, it will be capable of designing a machine more intelligent than itself. By similar reasoning, this next machine will also be capable of designing a machine more intelligent than itself. If every machine in turn does what it is capable of, we should expect a sequence of ever more intelligent machines. This intelligence explosion is sometimes combined with another idea, which we might call the “speed explosion”. The argument for a speed explosion starts from the familiar observation that computer processing speed doubles at regular intervals. Suppose that speed doubles every two years and will do so indefinitely. Now suppose that we have human-level artificial intelligence designing new processors. Then faster processing will lead to faster designers and an ever-faster design cycle, leading to a limit point soon afterwards. The argument for a speed explosion was set out by the artificial intelligence researcher Ray Solomonoff in his 1985 article “The Time Scale of Artificial Intelligence”. Eliezer Yudkowsky gives a succinct version of the argument in his 1996 article “Staring at the Singularity”: “Computing speed doubles every two subjective years of work.”
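The speed-explosion arithmetic can be made concrete with a worked example (an illustration of the argument's form under the stated doubling assumption, not a prediction):

```latex
% Assume design work is done by machines and each doubling of
% processing speed takes two \emph{subjective} years. After $k$
% doublings the designers run at speed $2^k$, so the $(k+1)$-th
% doubling takes only $2/2^{k}$ \emph{objective} years. The total
% objective time to unbounded speed is a convergent geometric series:
\[
  T \;=\; \sum_{k=0}^{\infty} \frac{2}{2^{k}}
    \;=\; 2 + 1 + \tfrac{1}{2} + \tfrac{1}{4} + \cdots
    \;=\; 4 \ \text{objective years,}
\]
% a finite limit point: the ``singularity'' of the speed explosion.
```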
We first discuss Michael Dummett’s philosophy of mathematics and Robert Brandom’s philosophy of language to demonstrate that inferentialism entails the falsity of Church’s Thesis and, as a consequence, the Computational Theory of Mind. This amounts to an entirely novel critique of mechanism in the philosophy of mind, one we show to have tremendous advantages over the traditional Lucas-Penrose argument.
Forms of justification for inductive machine learning techniques are discussed and classified into four types. This is done with a view to introducing some of these techniques and their justificatory guarantees to the attention of philosophers, and to initiating a discussion as to whether they must be treated separately or can rather be viewed consistently from within a single framework.
This paper revisits the often-debated question "Can machines think?" It is argued that the usual identification of machines with the notion of algorithm has been both counter-intuitive and counter-productive. This is based on the fact that the notion of algorithm merely requires an algorithm to contain a finite but arbitrary number of rules, whereas intuitively people tend to think of an algorithm as having a rather limited number of rules. The paper further proposes a modification of the above-mentioned explication of the notion of machines by quantifying the length of an algorithm. On that basis it appears possible to reconcile the opposing views on the topic, which people have been arguing about for more than half a century.