In this short paper I will introduce an idea which, I will argue, presents a fundamental additional challenge to the machine consciousness community. The idea takes the questions surrounding phenomenology, qualia and phenomenality one step further into the realm of intersubjectivity but with a twist, and the twist is this: that an agent’s intersubjective experience is deeply felt and necessarily co-affective; it is enkinaesthetic, and only through enkinaesthetic awareness can we establish the affective enfolding which enables first the perturbation, and then the balance and counter-balance, the attunement and co-ordination of whole-body interaction through reciprocal adaptation.
In the field of machine consciousness, it has been argued that in order to build human-like conscious machines, we must first have a computational model of qualia. To this end, some have proposed a framework that supports qualia in machines by implementing a model with three computational areas (i.e., the subconceptual, conceptual, and linguistic areas). These abstract mechanisms purportedly enable the assessment of artificial qualia. However, several critics of the machine consciousness project dispute this possibility. For instance, Searle, in his Chinese room objection, argues that however sophisticated a computational system is, it can never exhibit intentionality; thus, it would also fail to exhibit consciousness or any of its varieties. This paper argues that the proposed architecture mentioned above answers the problem posed by Searle, at least in part. Specifically, it argues that we could reformulate Searle’s worries in the Chinese room in terms of the three-stage artificial qualia model. And by doing so, we could see that the person doing all the translations in the room could realize the three areas in the proposed framework. Consequently, this demonstrates the actualization of self-consciousness in machines.
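The three computational areas are only named abstractly here. As a rough sense of what such a layered architecture involves, the following Python sketch passes a signal through three invented placeholder stages, from subconceptual features through a conceptual label to a linguistic report; every function, threshold, and sentence template is hypothetical, not the framework the paper discusses.

```python
# A bare-bones sketch of a three-area pipeline of the kind the abstract
# names (subconceptual, conceptual, linguistic). All details here are
# invented placeholders, meant only to show information flowing through
# three distinct computational areas.

def subconceptual_area(raw_signal):
    # Area 1: raw sensory data reduced to a normalised feature vector.
    return [min(1.0, abs(x) / 255.0) for x in raw_signal]

def conceptual_area(features):
    # Area 2: feature vectors mapped onto discrete concepts.
    brightness = sum(features) / len(features)
    return "bright-object" if brightness > 0.5 else "dim-object"

def linguistic_area(concept):
    # Area 3: concepts made reportable in language.
    return f"I am seeing a {concept}."

signal = [200, 240, 180, 220]
print(linguistic_area(conceptual_area(subconceptual_area(signal))))
# -> "I am seeing a bright-object."
```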
In this paper, I reconstruct Robert Nozick's experience machine objection to hedonism about well-being. I then explain and briefly discuss the most important recent criticisms that have been made of it. Finally, I question the conventional wisdom that the experience machine, while it neatly disposes of hedonism, poses no problem for desire-based theories of well-being.
Descartes developed an elaborate theory of animal physiology that he used to explain functionally organized, situationally adapted behavior in both human and nonhuman animals. Although he restricted true mentality to the human soul, I argue that he developed a purely mechanistic (or material) ‘psychology’ of sensory, motor, and low-level cognitive functions. In effect, he sought to mechanize the offices of the Aristotelian sensitive soul. He described the basic mechanisms in the Treatise on man, which he summarized in the Discourse. However, the Passions of the soul contains his most ambitious claims for purely material brain processes. These claims arise in abstract discussions of the functions of the passions and in illustrations of those functions. Accordingly, after providing an intellectual context for Descartes’s theory of the passions, especially by comparison with that of Thomas Aquinas, I examine its ‘machine psychology’, including the role of habituation and association. I contend that Descartes put forth what may reasonably be called a ‘psychology’ of the unensouled animal body and, correspondingly, of the human body when the soul does not intervene. He thus conceptually distinguished a mechanistically explicable sensory and motor psychology, common to nonhuman and human animals, from true mentality involving higher cognition and volition and requiring (in his view) an immaterial mind.
Robert Nozick's experience machine thought experiment (Nozick's scenario) is widely used as the basis for a ‘knockdown’ argument against all internalist mental state theories of well-being. Recently, however, it has been convincingly argued that Nozick's scenario should not be used in this way because it elicits judgments marred by status quo bias and other irrelevant factors. These arguments all include alternate experience machine thought experiments, but these scenarios also elicit judgments marred by status quo bias and other irrelevant factors. In this paper, several experiments are conducted in order to create and test a relatively bias-free experience machine scenario. It is argued that if an experience machine thought experiment is used to evaluate internalist mental state theories of well-being, then this relatively bias-free scenario should be used over any of the existing scenarios. Unlike the existing experience machine scenarios, when this new scenario is used to assess internalist mental state theories of well-being, it does not provide strong evidence to refute or endorse them.
Prudential hedonism has been beset by many objections, the strength and number of which have led most modern philosophers to believe that it is implausible. One objection in particular, however, is nearly always cited when a philosopher wants to argue that prudential hedonism is implausible—the experience machine objection to hedonism. This paper examines this objection in detail. First, the deductive and abductive versions of the experience machine objection to hedonism are explained. Following this, the contemporary responses to each version of the argument are assessed and the deductive version is argued to be relatively ineffective compared to the abductive version. Then, a taxonomy of the contemporary critical responses to the abductive version is created. Consideration of these responses shows that the abductive version of the objection is fairly powerful, but also that one type of response seems promising against it. This response argues that experience machine thought experiments seem to elicit judgments that are either too biased to be used as evidence for the objection or not obviously in favour of reality. It is argued that only this type of refutation seems likely to convince proponents of the abductive version that the objection is much weaker than they believe it to be. Finally, it is suggested that more evidence is required before anything definitive can be said on the matter.
The experience machine was traditionally thought to refute hedonism about welfare. In recent years, however, the tide has turned: many philosophers have argued not merely that the experience machine doesn't rule out hedonism, but that it doesn't count against it at all. I argue for a moderate position between those two extremes: although the experience machine doesn't decisively rule out hedonism, it provides us with some reason to reject it. I also argue for a particular way of using the experience machine to argue against hedonism – one that appeals directly to intuitions about the welfare values of experientially identical lives rather than to claims about what we value or claims about whether we would, or should, plug into the machine. The two issues are connected: the conviction that the experience machine leaves hedonism unscathed is partly due to neglect of the best way to use the experience machine.
It is argued that Nozick's experience machine thought experiment does not pose a particular difficulty for mental state theories of well-being. While the example shows that we value many things beyond our mental states, this simply reflects the fact that we value more than our own well-being. Nor is a mental state theorist forced to make the dubious claim that we maintain these other values simply as a means to desirable mental states. Valuing more than our mental states is compatible with maintaining that the impact of such values upon our well-being lies in their impact upon our mental lives.
We present a novel procedure to engage the public in ethical deliberations on the potential impacts of brain machine interface technology. We call this procedure a convergence seminar, a form of scenario-based group discussion that is founded on the idea of hypothetical retrospection. The theoretical background of this procedure and the results of five seminars are presented.
That the successful development of fully autonomous artificial moral agents (AMAs) is imminent is becoming the received view within artificial intelligence research and robotics. The discipline of Machine Ethics, whose mandate is to create such ethical robots, is consequently gaining momentum. Although it is often asked whether a given moral framework can be implemented into machines, it is never asked whether it should be. This paper articulates a pressing challenge for Machine Ethics: to identify an ethical framework that is both implementable into machines and whose tenets permit the creation of such AMAs in the first place. Without consistency between ethics and engineering, the resulting AMAs would not be genuine ethical robots, and hence the discipline of Machine Ethics would be a failure in this regard. Here this challenge is articulated through a critical analysis of the development of Kantian AMAs, as one of the leading contenders for being the ethic that can be implemented into machines. In the end, however, the development of Kantian artificial moral machines is found to be anti-Kantian. The upshot of all this is that machine ethicists need to look elsewhere for an ethic to implement into their machines.
Nozick’s Experience Machine thought experiment is generally taken to make a compelling, if not conclusive, case against philosophical hedonism. I argue that it does not and, indeed, that regardless of the results, it cannot provide any reason to accept or reject either hedonism or any other philosophical account of wellbeing since it presupposes preferentism, the desire-satisfaction account of wellbeing. Preferentists cannot take any comfort from the results of such thought experiments because they assume preferentism and therefore cannot establish it. Neither can anyone else, since only a preferentist should accept the terms of the thought experiment.
Brain machine interface (BMI) technology makes direct communication between the brain and a machine possible by means of electrodes. This paper reviews the existing and emerging technologies in this field and offers a systematic inquiry into the relevant ethical problems that are likely to emerge in the coming decades.
This paper is a warning that objections based on thought experiments can be misleading because they may elicit judgments that, unbeknownst to the judger, have been seriously skewed by psychological biases. The fact that most people choose not to plug in to the Experience Machine in Nozick’s (1974) famous thought experiment has long been used as a knock-down objection to hedonism because it is widely thought to show that real experiences are more important to us than pleasurable experiences. This paper argues that the commonplace choice to remain in reality when offered a life in the Experience Machine is best explained by status quo bias – the irrational preference for things to remain the same. An alternative thought experiment, empirical evidence, and discussion of how psychological biases can affect our judgments are provided to support this argument.
The scientific study of living organisms is permeated by machine and design metaphors. Genes are thought of as the “blueprint” of an organism, organisms are “reverse engineered” to discover their functionality, and living cells are compared to biochemical factories, complete with assembly lines, transport systems, messenger circuits, etc. Although the notion of design is indispensable to think about adaptations, and engineering analogies have considerable heuristic value (e.g., optimality assumptions), we argue they are limited in several important respects. In particular, the analogy with human-made machines falters when we move down to the level of molecular biology and genetics. Living organisms are far more messy and less transparent than human-made machines. Notoriously, evolution is an opportunistic tinkerer, blindly stumbling on “designs” that no sensible engineer would come up with. Despite impressive technological innovation, the prospect of artificially designing new life forms from scratch has proven more difficult than the superficial analogy with “programming” the right “software” would suggest. The idea of applying straightforward engineering approaches to living systems and their genomes—isolating functional components, designing new parts from scratch, recombining and assembling them into novel life forms—pushes the analogy with human artifacts beyond its limits. In the absence of a one-to-one correspondence between genotype and phenotype, there is no straightforward way to implement novel biological functions and design new life forms. Both the developmental complexity of gene expression and the multifarious interactions of genes and environments are serious obstacles for “engineering” a particular phenotype. The problem of reverse-engineering a desired phenotype to its genetic “instructions” is probably intractable for any but the most simple phenotypes. Recent developments in the field of bio-engineering and synthetic biology reflect these limitations. Instead of genetically engineering a desired trait from scratch, as the machine/engineering metaphor promises, researchers are making greater strides by co-opting natural selection to “search” for a suitable genotype, or by borrowing and recombining genetic material from extant life forms.
It is widely held that the Experience Machine is the basis of a serious objection to Hedonistic theories of welfare. It is also widely held that Desire Satisfactionist theories of welfare can readily avoid problems stemming from the Experience Machine. But in this paper, we argue that if the Experience Machine poses a serious problem for Hedonism, it also poses a serious problem for Desire Satisfactionism. We raise two objections to Desire Satisfactionism, each of which relies on the Experience Machine. The first is very much like the well-known Experience Machine objection to Hedonism. The second asks whether someone who accepts Desire Satisfactionism should want to form a desire to plug into the Experience Machine.
Robert Nozick's experience machine thought experiment is often considered a decisive refutation of hedonism. I argue that the conclusions we draw from Nozick's thought experiment ought to be informed by considerations concerning the operation of our intuitions about value. First, I argue that, in order to show that practical hedonistic reasons are not causing our negative reaction to the experience machine, we must not merely stipulate their irrelevance (since our intuitions are not always responsive to stipulation) but fill in the concrete details that would make them irrelevant. If we do this, we may see our feelings about the experience machine becoming less negative. Second, I argue that, even if our feelings about the experience machine do not perfectly track hedonistic reasons, there are various reasons to doubt the reliability of our anti-hedonistic intuitions. And finally, I argue that, since in the actual world seeing certain things besides pleasure as ends in themselves may best serve hedonistic ends, hedonism may justify our taking these other things to be intrinsically valuable, thus again making the existence of our seemingly anti-hedonistic intuitions far from straightforward evidence for the falsity of hedonism.
Brain-machine interfaces are a growing field of research and application. The increasing possibilities to connect the human brain to electronic devices and computer software can be put to use in medicine, the military, and entertainment. Concrete technologies include cochlear implants, Deep Brain Stimulation, neurofeedback and neuroprosthesis. The expectations for the near and further future are high, though it is difficult to separate hope from hype. The focus in this paper is on the effects that these new technologies may have on our ‘symbolic order’—on the ways in which popular categories and concepts may change or be reinterpreted. First, the blurring of the distinction between man and machine and the idea of the cyborg are discussed. It is argued that the morally relevant difference is that between persons and non-persons, which does not necessarily coincide with the distinction between man and machine. The concept of the person remains useful. It may, however, become more difficult to assess the limits of the human body. Next, the distinction between body and mind is discussed. The mind is increasingly seen as a function of the brain, and thus understood in bodily and mechanical terms. This raises questions concerning concepts of free will and moral responsibility that may have far-reaching consequences in the field of law, where some have argued for a revision of our criminal justice system, from retributivist to consequentialist. Even without such an (unlikely and unwarranted) revision occurring, brain-machine interactions raise many interesting questions regarding distribution and attribution of responsibility.
According to the most popular theories of intentionality, a family of theories we will refer to as “functional intentionality,” a machine can have genuine intentional states so long as it has functionally characterizable mental states that are causally hooked up to the world in the right way. This paper considers a detailed description of a robot that seems to meet the conditions of functional intentionality, but which falls victim to what I call “the composition problem.” One obvious way to escape the problem (arguably, the only way) is if the robot can be shown to be a moral patient – to deserve a particular moral status. If so, it isn’t clear how functional intentionality could remain plausible (something like “phenomenal intentionality” would be required). Finally, while it would have seemed that a reasonable strategy for establishing the moral status of intelligent machines would be to demonstrate that the machine possessed genuine intentionality, the composition argument suggests that the order of precedence is reversed: The machine must first be shown to possess a particular moral status before it is a candidate for having genuine intentionality.
Measurement is said to be the basis of the exact sciences as the process of assigning numbers to matter (things or their attributes), thus making it possible to apply the mathematically formulated laws of nature to the empirical world. Mathematics and empiria are best accorded to each other in laboratory experiments, which function as what Nancy Cartwright calls a nomological machine: an arrangement generating (mathematical) regularities. On the basis of accounts of measurement errors and uncertainties, I will argue for two claims: 1) Both the fundamental laws of physics, corresponding to the ideal nomological machine, and phenomenological laws, corresponding to the material nomological machine, lie, being highly idealised relative to the empirical reality; moreover, laboratory measurement data do not describe properties inherent to the world independently of human understanding of it. 2) Therefore the naive, representational view of measurement and experimentation should be replaced with a more pragmatic or practice-based view.
We describe an emerging field, that of nonclassical computability and nonclassical computing machinery. According to the nonclassicist, the set of well-defined computations is not exhausted by the computations that can be carried out by a Turing machine. We provide an overview of the field and a philosophical defence of its foundations.
This paper is a summary and evaluation of work presented at the AAAI 2005 Fall Symposium on Machine Ethics that brought together participants from the fields of Computer Science and Philosophy with the aim of clarifying the nature of this newly emerging field and discussing different approaches one could take towards realizing the ultimate goal of creating an ethical machine.
Machines are often employed in Heidegger’s philosophy as instances to illustrate specific features of modern technology. But what is it about machines that allows them to fulfill this role? This essay argues there is a unique ontological force to the machine that can be understood by looking at distinctions between techne and mechane in ancient Greek sources and applying these distinctions to a reading of Heidegger’s early thought on equipment and later thought on poiesis. Especially with respect to Heidegger’s appropriation of Aristotle’s conception of dunamis, it becomes apparent from a Heideggerian perspective that machines provide an increase in capacity to their human users, but only at a cost. This cost involves a problem of knowledge, where the set of operations required in machine use results in a loss of understanding of our dependency on being. The essay then concludes with a discussion of how this relation to machinic capacity is not merely pessimistic and deterministic, but indicates what might constitute a free relation to machines.
Herein we make a plea to machine ethicists for the inclusion of constraints on their theories consistent with empirical data on human moral cognition. As philosophers, we clearly lack widely accepted solutions to issues regarding the existence of free will, the nature of persons and firm conditions on moral agency/patienthood, all of which are indispensable concepts to be deployed by any machine able to make moral judgments. No agreement seems forthcoming on these matters, and we don’t hold out hope for machines that can both always do the right thing (on some general ethic) and produce explanations for their behavior that would be understandable to a human confederate. Our tentative solution involves understanding the folk concepts associated with our moral intuitions regarding these matters, and how they might be dependent upon the nature of human cognitive architecture. It is in this spirit that we begin to explore the complexities inherent in human moral judgment via computational theories of the human cognitive architecture, rather than under the extreme constraints imposed by rational-actor models assumed throughout much of the literature on philosophical ethics. After discussing the various advantages and challenges of taking this particular perspective on the development of artificial moral agents, we computationally explore a case study of human intuitions about the self and causal responsibility. We hypothesize that a significant portion of the variance in reported intuitions for this case might be explained by appeal to an interplay between the human ability to mindread and the way that knowledge is organized conceptually in the cognitive system. In the present paper, we build on a pre-existing computational model of mindreading (Bello et al. 2007) by adding constraints related to psychological distance (Trope and Liberman 2010), a well-established psychological theory of conceptual organization. Our initial results suggest that studies of folk concepts involved in moral intuitions lead us to an enriched understanding of cognitive architecture and a more systematic method for interpreting the data generated by such studies.
Most philosophers appear to have ignored the distinction between the broad concept of Virtual Machine Functionalism (VMF) described in Sloman & Chrisley (2003) and the better known version of functionalism referred to there as Atomic State Functionalism (ASF), which is often given as an explanation of what Functionalism is, e.g. in Block (1995). One of the main differences is that ASF encourages talk of supervenience of states and properties, whereas VMF requires supervenience of machines that are arbitrarily complex networks of causally interacting (virtual, but real) processes, possibly operating on different time-scales. Examples include the many different processes usually running concurrently on a modern computer, performing various tasks concerned with handling interfaces to physical devices, managing the file system, dealing with security, providing tools, entertainments, and games, and possibly processing research data. Another example of VMF would be the kind of functionalism involved in a large collection of possibly changing socio-economic structures and processes interacting in a complex community; yet another is illustrated by the kind of virtual machinery involved in the many levels of visual processing of information about spatial structures, processes, and relationships (including percepts of moving shadows, reflections, highlights, optical-flow patterns and changing affordances) as you walk through a crowded car-park on a sunny day: generating a whole zoo of interacting qualia. (Forget solitary red patches, or experiences thereof.)

Perhaps VMF should be re-labelled "Virtual MachinERY Functionalism" because the word 'machinery' more readily suggests something complex with interacting parts. VMF is concerned with virtual machines that are made up of interacting, concurrently active (but not necessarily synchronised) chunks of virtual machinery which not only interact with one another and with their physical substrates (which may be partly shared, and also frequently modified by garbage collection, metabolism, or whatever) but can also concurrently interact with and refer to various things in the immediate and remote environment (via sensory/motor channels, and possible future technologies also). I.e. virtual machinery can include mechanisms that create and manipulate semantic content, not only syntactic structures or bit patterns as digital virtual machines do.

The paper is kept freely accessible, so that mistakes can be corrected and improvements added, at http://www.cs.bham.ac.uk/research/projects/cogaff/misc/vm-functionalism.html. It is now part of the Meta-Morphogenesis project: http://www.cs.bham.ac.uk/research/projects/cogaff/misc/meta-morphogenesis.html.
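As a toy gesture at what "interacting, concurrently active chunks of virtual machinery" can mean in practice, the sketch below (my illustration, not Sloman's) runs two unsynchronised virtual processes on one physical substrate: a fast process updating shared state and a slower one observing it.

```python
# Two concurrent virtual processes, on unsynchronised time-scales,
# sharing state and supervening on one physical substrate (here, the
# Python interpreter). Purely illustrative of the VMF picture above.

import threading
import time

state = {"percepts": 0, "observations": []}
lock = threading.Lock()

def sensor():
    # Fast virtual process: keeps updating shared state.
    for _ in range(5):
        with lock:
            state["percepts"] += 1
        time.sleep(0.01)

def monitor():
    # Slower virtual process: observes the other process's effects.
    for _ in range(2):
        time.sleep(0.03)
        with lock:
            state["observations"].append(f"noticed {state['percepts']} percepts")

threads = [threading.Thread(target=sensor), threading.Thread(target=monitor)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(state["observations"])  # e.g. ['noticed 3 percepts', 'noticed 5 percepts']
```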
Cybernetics promoted machine-supported investigations of adaptive sensorimotor behaviours observed in biological systems. This methodological approach receives renewed attention in contemporary robotics, cognitive ethology, and the cognitive neurosciences. Its distinctive features concern machine experiments, and their role in testing behavioural models and explanations flowing from them. Cybernetic explanations of behavioural events, regularities, and capacities rely on multiply realizable mechanism schemata, and strike a sensible balance between causal and unifying constraints. The multiple realizability of cybernetic mechanism schemata paves the way to principled comparisons between biological systems and machines. Various methodological issues involved in the transition from mechanism schemata to their machine instantiations are addressed here, by reference to a simple sensorimotor coordination task. These concern the proper treatment of ceteris paribus clauses in experimental settings, the significance of running experiments with correct but incomplete machine instantiations of mechanism schemata, and the advantage of operating with real machines — as opposed to simulated ones — immersed in real environments.
Analogies to machines are commonplace in the life sciences, especially in cellular and molecular biology — they shape conceptions of phenomena and expectations about how they are to be explained. This paper offers a framework for thinking about such analogies. The guiding idea is that machine-like systems are especially amenable to decompositional explanation, i.e., to analyses that tease apart underlying components and attend to their structural features and interrelations. I argue that for decomposition to succeed a system must exhibit causal orderliness, which I explicate in terms of differentiation among parts and the significance of local relations. I also discuss what makes a model depict its target as machine-like, suggesting that a key issue is the degree of detail with respect to the target’s parts and their interrelations.
Learning general concepts in imperfect environments is difficult since training instances often include noisy data, inconclusive data, incomplete data, unknown attributes, unknown attribute values and other barriers to effective learning. It is well known that people can learn effectively in imperfect environments, and can manage to process very large amounts of data. Imitating human learning behavior therefore provides a useful model for machine learning in real-world applications. This paper proposes a new, more effective way to represent imperfect training instances and rules, and based on the new representation, a Human-Like Learning (HULL) algorithm for incrementally learning concepts well in imperfect training environments. Several examples are given to make the algorithm clearer. Finally, experimental results are presented that show the proposed learning algorithm works well in imperfect learning environments.
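The HULL algorithm itself is not spelled out in the abstract. The sketch below is therefore a generic illustration of incremental concept learning under noise and missing attribute values, not the paper's algorithm: rule confidences are raised by confirming instances and shrunk, rather than erased, by contradicting ones, so that occasional noisy examples do not destroy a good rule.

```python
# A minimal sketch of incremental concept learning from imperfect
# instances (a generic illustration, not the paper's HULL algorithm).
# Instances may have missing attribute values (None); rules carry
# confidence scores updated incrementally for noise tolerance.

from collections import defaultdict

class IncrementalConceptLearner:
    def __init__(self, decay=0.9, reward=1.0):
        self.decay = decay                 # penalty factor on contradiction
        self.reward = reward               # support added on confirmation
        self.scores = defaultdict(float)   # (rule, label) -> confidence

    def _rules_from(self, instance):
        # Candidate rules: each known attribute-value pair.
        # Missing values (None) generate no candidate rule.
        return [(attr, val) for attr, val in instance.items() if val is not None]

    def observe(self, instance, label):
        # Confirmations raise a rule's score; contradictions shrink
        # the opposing score multiplicatively instead of deleting it.
        for rule in self._rules_from(instance):
            self.scores[(rule, label)] += self.reward
            self.scores[(rule, not label)] *= self.decay

    def predict(self, instance):
        # Vote with accumulated rule confidences; unknowns abstain.
        votes = {True: 0.0, False: 0.0}
        for rule in self._rules_from(instance):
            for label in (True, False):
                votes[label] += self.scores.get((rule, label), 0.0)
        return votes[True] >= votes[False]

learner = IncrementalConceptLearner()
learner.observe({"wings": True, "legs": 2}, True)
learner.observe({"wings": True, "legs": None}, True)   # incomplete instance
learner.observe({"wings": True, "legs": 2}, False)     # noisy instance
print(learner.predict({"wings": True, "legs": 2}))     # True: noise outvoted
```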
In his provocative “Can We Test the Experience Machine?”, Basil Smith argues that we should recognise a limit on experimental philosophy. In this response to Smith, I will argue that his limit does not prevent us from usefully testing most experience machine thought experiments, including De Brigard’s inverted experience machine scenarios. I will also argue that, if taken seriously, Smith’s limit has far-reaching consequences for traditional (non-experimental) philosophy as well.
Philosophical discussion of Alan Turing’s writings on intelligence has mostly revolved around a single point made in a paper published in the journal Mind in 1950. This is unfortunate, for Turing’s reflections on machine (artificial) intelligence, human intelligence, and the relation between them were more extensive and sophisticated. They are seen to be extremely well-considered and sound in retrospect. Recently, IBM developed a question-answering computer (Watson) that could compete against humans on the game show Jeopardy! There are hopes it can be adapted to other contexts besides that game show, in the role of a collaborator of, rather than a competitor to, humans. Another, different, research project — an artificial intelligence program put into operation in 2010 — is the machine learning program NELL (Never Ending Language Learning), which continuously ‘learns’ by ‘reading’ massive amounts of material on millions of web pages. Both of these recent endeavors in artificial intelligence rely to some extent on the integration of human guidance and feedback at various points in the machine’s learning process. In this paper, I examine Turing’s remarks on the development of intelligence used in various kinds of search, in light of the experience gained to date on these projects.
On the 27th of October, 1949, the Department of Philosophy at the University of Manchester organized a symposium "Mind and Machine", as Michael Polanyi noted in his Personal Knowledge (1974, p. 261). This event is known, especially among scholars of Alan Turing, but it is scarcely documented. Wolfe Mays (2000) reported on the debate, which he personally had attended, and paraphrased a mimeographed document that is preserved at the Manchester University archive. He forwarded a copy to Andrew Hodges and B. Jack Copeland, who then published it on their respective websites. The basis of the interpretation here is the copy preserved in the Regenstein Library of the University of Chicago, Special Collections, Polanyi Collection (abbreviated RPC, box 22, folder 19). The same collection holds the mimeographed statement that Polanyi prepared for this symposium: "Can the mind be represented by a machine?" This text has not been studied by Polanyi scholars.
The Inventive Machine project is the matter of discussion. The project aims to develop a family of AI systems for intelligent support of all stages of engineering design. Peculiarities of the IM project: a deep and comprehensive knowledge base — the theory of inventive problem solving (TIPS); solving complex problems at the level of inventions; application in any area of engineering; structural prediction of engineering system development. The systems of the second generation are described in detail.
We try to show that there is no difference in principle between communicating a piece of information to a human and to a machine. The argumentation depends on the following theses: Communicating is the transfer of information; information has propositional form; propositional form can be modelled as categorisation; categorisation can be modelled in a machine; a suitably equipped machine can grasp propositional content designed for human communication. What I suggest is that the discussion should focus on the truth and precise meaning of these statements. However, in case these statements are true it follows that: For any act of communication that successfully transfers a piece of information to a human, that act could also transfer that piece of information to a machine.
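As an illustration of the single step "categorisation can be modelled in a machine", here is a minimal nearest-prototype categoriser that maps a feature vector to a category and assembles a crude propositional form; the feature dimensions, prototypes, and predicate are invented for the example, and the paper argues the point abstractly rather than via such code.

```python
# A toy categoriser: a perceived feature vector is assigned to the
# nearest prototype category, from which a simple propositional form
# (subject-category, predicate) is assembled. Illustrative only.

import math

PROTOTYPES = {
    "dog":  (0.9, 0.1, 0.8),   # hypothetical feature dimensions
    "cat":  (0.8, 0.9, 0.3),
    "bird": (0.1, 0.2, 0.9),
}

def categorise(features):
    # Assign the category whose prototype is nearest in feature space.
    return min(PROTOTYPES, key=lambda c: math.dist(features, PROTOTYPES[c]))

def proposition(subject_features, predicate):
    # Propositional form as a (subject-category, predicate) pair.
    return (categorise(subject_features), predicate)

print(proposition((0.85, 0.15, 0.75), "is barking"))  # ('dog', 'is barking')
```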
I consider three aspects in which machine learning and philosophy of science can illuminate each other: methodology, inductive simplicity and theoretical terms. I examine the relations between the two subjects and conclude by claiming these relations to be very close.
In this paper we discuss the application of a new machine learning approach – Argument Based Machine Learning – to the legal domain. An experiment using a dataset which has also been used in previous experiments with other learning techniques is described, and a comparison with previous experiments is made. We also tested this method for its robustness to noise in the learning data. Argument based machine learning is particularly suited to the legal domain as it makes use of the justifications of decisions which are available. Importantly, where a large number of decided cases are available, it provides a way of identifying which need to be considered. Using this technique, only decisions which will have an influence on the rules being learned are examined.
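The selection idea, examining only decisions that can influence the rules being learned, can be made concrete with a small sketch. This is a simplification of my own, not the paper's ABML implementation: a decided case is worth reviewing only if the current rule set misclassifies it, and in ABML such a case would then be annotated with the justifications given in the decision.

```python
# A rough sketch of case selection: keep only decided cases whose
# outcome the current rules get wrong, since only they can influence
# the rules being learned. A simplification, not the paper's method.

def predict(rules, case, default=False):
    # Each rule is (condition, outcome); the first matching rule fires.
    for condition, outcome in rules:
        if condition(case):
            return outcome
    return default

def cases_worth_reviewing(rules, decided_cases):
    # Keep only the cases the current rules misclassify.
    return [(case, outcome) for case, outcome in decided_cases
            if predict(rules, case) != outcome]

rules = [(lambda c: c["prior_convictions"] > 2, True)]
decided = [
    ({"prior_convictions": 3, "age": 40}, True),   # already explained
    ({"prior_convictions": 0, "age": 19}, True),   # needs an argument
]
print(cases_worth_reviewing(rules, decided))  # only the second case
```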
This paper presents an analysis of three major contests for machine intelligence. We conclude that a new era for Turing’s test requires a fillip in the guise of a committed sponsor, not unlike DARPA, funders of the successful 2007 Urban Challenge.
This paper describes a tool for assisting lawyers and paralegal teams during document review in eDiscovery. The tool combines a machine learning technology (CategoriX) with an advanced multi-touch interface, capable not only of addressing the usual cost, time and accuracy issues in document review, but also of facilitating the work of the review teams by capitalizing on the intelligence of the reviewers and enabling collaborative work.
Gödel's Theorem is often used in arguments against machine intelligence, suggesting humans are not bound by the rules of any formal system. However, Gödelian arguments can be used to support AI, provided we extend our notion of computation to include devices incorporating random number generators. A complete description scheme can be given for integer functions, by which nonalgorithmic functions are shown to be partly random. Not being restricted to algorithms can be accounted for by the availability of an arbitrary random function. Humans, then, might not be rule-bound, but Gödelian arguments also suggest how the relevant sort of nonalgorithmicity may be trivially made available to machines.
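A toy version of the extended notion of computation can make the proposal vivid. The device below couples an ordinary computable function with a random number generator; it is only an illustration of the kind of machine the paper has in mind, not its formal construction.

```python
# A device that combines a deterministic algorithm with genuine
# randomness: part of each output is drawn from an RNG, so the integer
# function it realises over repeated runs is not fixed by any single
# algorithm alone. Purely illustrative of the abstract's proposal.

import secrets

def algorithmic_part(n):
    # Any ordinary computable function of the input.
    return n * n

def device(n):
    # Output = computable part combined with a genuinely random bit.
    return 2 * algorithmic_part(n) + secrets.randbits(1)

print([device(3) for _ in range(5)])  # e.g. [18, 19, 18, 18, 19]
```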
The impression we are often given by historians of philosophy is that the readiness of medieval philosophers to appeal to authorities, such as The Bible, the Church, and Aristotle, was not shared by many early modern philosophers, for whom there was a marked preference to look for illumination via experience, the exercise of reason, or a combination of the two. Although this may be accurate, broadly speaking, it is notable that, in spite of the waning enthusiasm for deferring to traditional authorities, appeals to scripture remained commonplace in the work of early modern philosophers. In order to understand the philosophers of the early modern period, the philosophies they developed, and the debates they fought, we need to understand how they used scripture. This paper is intended to contribute to this desideratum by examining how scripture was used by those who engaged in a particular debate within natural philosophy, the so-called beast-machine controversy of the 17th and 18th centuries.
In early modern times it was not uncommon for thinkers to tease out from the nature of God various doctrines of substantial physical and metaphysical import. This approach was particularly fruitful in the so-called beast-machine controversy, which erupted following Descartes’ claim that animals are automata, that is, pure machines, without a spiritual, incorporeal soul. Over the course of this controversy, thinkers on both sides attempted to draw out important truths about the status of animals simply from the notion or attributes of God. Automatists – led by Nicolas Malebranche and Antoine Dilly – developed six such arguments, appealing to divine justice, providence, economy, glory (twice) and wisdom, while opponents of animal automatism developed two arguments, appealing to divine wisdom and goodness. In this article I shall examine the substance of all eight of these arguments, along with their origins, patronage, and variations, and the objections they elicited from opponents, with the aim of determining their suitability for use in contemporary debates about animal sentience and consciousness, and hence their relevance for contemporary philosophers.
Can we test philosophical thought experiments, such as whether people would enter an experience machine or would leave one once they are inside? Dan Weijers argues that since 'rational' subjects (e.g. students taking surveys in college classes) are believable, we can do so. By contrast, I argue that because such subjects will probably have the wrong affect (i.e. emotional states) when they are tested, such tests are almost worthless. Moreover, understood as a general policy, such pretend testing would ruin the results of most psychological tests, such as those of helping behavior, attitudes to authority, moral transgressions, etc. However, I also argue that certain philosophical thought experiments do not require us to have as much (or any) affect to understand them, or to elicit intuitions, and so can be tested. Generally, experimental philosophy must adhere to this limit, on pain of offering vacuous results.
Intelligence is not a property unique to the human brain; rather it represents a spectrum of phenomena. An understanding of the evolution of intelligence makes it clear that the evolution of machine intelligence has no theoretical limits — unlike the evolution of the human brain. Machine intelligence will outpace human intelligence and very likely will do so during the lifetime of our children. The mix of advanced machine intelligence with human individual and communal intelligence will create an evolutionary discontinuity as profound as the origin of life. It will presage the end of the human species as we know it. The question, in the author's view, is not whether this will happen, but when, and what should be our response.
Instead of the low-level neurophysiological mimicry and exploratory programming methods commonly used in the machine consciousness field, the hierarchical Operational Architectonics (OA) framework of brain and mind functioning proposes an alternative conceptual-theoretical framework as a new direction in the area of model-driven machine (robot) consciousness engineering. The unified brain-mind theoretical OA model explicitly captures (though in an informal way) the basic essence of brain functional architecture, which indeed constitutes a theory of consciousness. The OA describes the neurophysiological basis of the phenomenal level of brain organization. In this context the problem of producing man-made “machine” consciousness and “artificial” thought is a matter of duplicating all levels of the operational architectonics hierarchy (with its inherent rules and mechanisms) found in the brain electromagnetic field. We hope that the conceptual-theoretical framework described in this paper will stimulate the interest of mathematicians and/or computer scientists in abstracting and formalizing the principles of the hierarchy of brain operations which are the building blocks for phenomenal consciousness and thought.
John Searle distinguished between weak and strong artificial intelligence (AI). This essay discusses a third alternative, mild AI, according to which a machine may be capable of possessing a species of mentality. Using James Fetzer's conception of minds as semiotic systems, the possibility of what might be called “mild AI” receives consideration. Fetzer argues against strong AI by contending that digital machines lack the ground relationship required of semiotic systems. In this essay, the implementational nature of semiotic processes posited by Charles S. Peirce's triadic sign relation is re-examined in terms of the underlying dispositional processes and the ontological levels they would span in an inanimate machine. This suggests that, if non-human mentality can be replicated rather than merely simulated in a digital machine, the direction to pursue appears to be that of mild AI.
Turing wrote that the “guiding principle” of his investigation into the possibility of intelligent machinery was “The analogy [of machinery that might be made to show intelligent behavior] with the human brain.” In his discussion of the investigations that Turing said were guided by this analogy, however, he employs a more far-reaching analogy: he eventually expands the analogy from the human brain out to “the human community as a whole.” Along the way, he takes note of an obvious fact in the bigger scheme of things regarding human intelligence: grownups were once children; this leads him to imagine what a machine analogue of childhood might be. In this paper, I’ll discuss Turing’s child-machine, what he said about different ways of educating it, and what impact the “bringing up” of a child-machine has on its ability to behave in ways that might be taken for intelligent. I’ll also discuss how some of the various games he suggested humans might play with machines are related to this approach.
Animals, including humans, are usually judged on what they could become, rather than what they are. Many physical and cognitive abilities in the ‘animal kingdom’ are only acquired (to a given degree) when the subject reaches a certain stage of development, which can be accelerated or spoilt depending on the environment, training or education. The term ‘potential ability’ usually refers to how quick and likely the process of attaining the ability is. In principle, things should not be different for the ‘machine kingdom’. While machines can be characterised by a set of cognitive abilities, and measuring them is already a big challenge, known as ‘universal psychometrics’, a more informative, and yet more challenging, goal would be to also determine the potential cognitive abilities of a machine. In this paper we investigate the notion of potential cognitive ability for machines, focussing especially on universality and intelligence. We consider several machine characterisations (non-interactive and interactive) and give definitions for each case, considering permanent and temporal potentials. From these definitions, we analyse the relation between some potential abilities, we bring out the dependency on the environment distribution and we suggest some ideas about how potential abilities can be measured. Finally, we also analyse the potential of environments at different levels and briefly discuss whether machines should be designed to be intelligent or potentially intelligent.
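One way to picture the actual/potential distinction is to summarise an agent by a learning curve and read potential ability off the curve's continuation. The sketch below does exactly that, with invented numbers and an assumed exponential curve; it is an illustration of the distinction, not the paper's proposed measure.

```python
# Actual ability: performance now. Potential ability (here): expected
# performance after a fixed further training horizon. The curve shape
# and all numbers are invented for illustration.

def learning_curve(steps, ceiling, rate):
    # Hypothetical exponential approach to a performance ceiling.
    return ceiling * (1 - (1 - rate) ** steps)

def actual_ability(agent):
    return learning_curve(agent["trained_steps"], agent["ceiling"], agent["rate"])

def potential_ability(agent, horizon=100):
    return learning_curve(agent["trained_steps"] + horizon,
                          agent["ceiling"], agent["rate"])

slow_but_deep = {"trained_steps": 5, "ceiling": 1.0, "rate": 0.02}
fast_but_shallow = {"trained_steps": 5, "ceiling": 0.5, "rate": 0.30}
for name, a in [("slow_but_deep", slow_but_deep),
                ("fast_but_shallow", fast_but_shallow)]:
    print(name, round(actual_ability(a), 2), round(potential_ability(a), 2))
# fast_but_shallow is better now; slow_but_deep has higher potential.
```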
Are trust relationships involving humans and artificial agents possible? This controversial question has become a hotly debated topic in the emerging field of machine ethics. Employing a model of trust advanced by Buechner and Tavani (2011), I argue that the “short answer” to this question is yes. However, I also argue that a more complete and nuanced answer will require us to articulate the various levels of trust that are also possible in environments comprising both human agents (HAs) and artificial agents (AAs). In defending this view, I show how James Moor’s model for distinguishing four levels of ethical agents in the context of machine ethics (2006) can help us to develop a framework that differentiates four levels of trust. Via a series of hypothetical scenarios, I illustrate each level of trust involved in HA–AA relationships. Finally, I argue that these levels of trust reflect three key factors or variables: the level of autonomy of the individual AAs involved, the degree of risk/vulnerability on the part of the HAs who place their trust in the AAs, and the kind of interactions that occur between the HAs and AAs in the trust environments.
The problem of valid induction could be stated as follows: are we justified in accepting a given hypothesis on the basis of observations that frequently confirm it? The present paper argues that this question is relevant for the understanding of Machine Learning, but insufficient. Recent research in inductive reasoning has prompted another, more fundamental question: there is not just one given rule to be tested, there are a large number of possible rules, and many of these are somehow confirmed by the data — how are we to restrict the space of inductive hypotheses and choose effectively some rules that will probably perform well on future examples? We analyze if and how this problem is approached in standard accounts of induction and show the difficulties that are present. Finally, we suggest that the explanation-based learning approach and related methods of knowledge intensive induction could be, if not a solution, at least a tool for solving some of these problems.
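The size of the problem is easy to exhibit concretely. In the sketch below, the hypothesis space is all boolean functions over three binary attributes; three observations still leave 32 of the 256 hypotheses "confirmed" by the data, which is exactly the predicament the paper describes (the attribute count and observations are invented for the example).

```python
# With only a few observations, a huge number of hypotheses remain
# consistent with the data. The hypothesis space here is every boolean
# function over three binary attributes: 2**8 = 256 candidates.

from itertools import product

INPUTS = list(product([0, 1], repeat=3))              # 8 possible instances
hypotheses = list(product([0, 1], repeat=len(INPUTS)))  # 256 labelings

observations = [((0, 0, 1), 1), ((1, 0, 1), 1), ((1, 1, 0), 0)]

def consistent(h):
    # A hypothesis survives if it agrees with every observation.
    return all(h[INPUTS.index(x)] == y for x, y in observations)

survivors = [h for h in hypotheses if consistent(h)]
print(len(hypotheses), "hypotheses,", len(survivors), "still consistent")
# 256 hypotheses, 32 still consistent: induction must somehow restrict
# this space before choosing among the survivors.
```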
Human and machine discovery are gradual problem-solving processes of searching large problem spaces for incompletely defined goal objects. Research on problem solving has usually focused on search of an instance space (empirical exploration) and a hypothesis space (generation of theories). In scientific discovery, search must often extend to other spaces as well: spaces of possible problems, of new or improved scientific instruments, of new problem representations, of new concepts, and others. This paper focuses especially on the processes for finding new problem representations and new concepts, which are relatively new domains for research on discovery. Scientific discovery has usually been studied as an activity of individual investigators, but these individuals are positioned in a larger social structure of science, being linked by the blackboard of open publication (as well as by direct collaboration). Even while an investigator is working alone, the process is strongly influenced by knowledge and skills stored in memory as a result of previous social interaction. In this sense, all research on discovery, including the investigations on individual processes discussed in this paper, is social psychology, or even sociology.
Examples in the history of Automated Theorem Proving are given, in order to show that even a seemingly ‘mechanical’ activity, such as deductive inference drawing, involves special cultural features and tacit knowledge. Mechanisation of reasoning is thus regarded as a complex undertaking in ‘cultural pruning’ of human-oriented reasoning. Sociological counterparts of this passage from human- to machine-oriented reasoning are discussed, by focusing on problems of man-machine interaction in the area of computer-assisted proof processing.
This paper seeks to understand machine cognition. The nature of machine cognition has been shrouded in incomprehensibility. We have often encountered familiar arguments in cognitive science that human cognition is still faintly understood. This paper will argue that machine cognition is far less understood than even human cognition despite the fact that a lot about computer architecture and computational operations is known. Even if there have been putative claims about the transparency of the notion of machine computations, these claims do not hold out in unraveling machine cognition, let alone machine consciousness (if there is any such thing). The nature and form of machine cognition remains further confused also because of attempts to explain human cognition in terms of computation and to model/simulate (aspects of) human cognitive processing in machines. Given that these problems in characterizing machine cognition persist, a view of machine cognition that aims to avoid these problems is outlined. The argument that is advanced is that something becomes a computation in machines only when a human interprets it, which is a kind of semiotic causation. From this it follows that a computing machine is not engaged in a computation unless a human interprets what it is doing; instead, it is engaged in machine cognition, which is defined as a member or subset of the set of all possible mappings of inputs to outputs. The human interpretation, which is a semiotic process, gives meaning to what a machine does, and then what it does becomes a computation.
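The closing definition can be rendered literally in a few lines. Below, one and the same input-output mapping is read under two different human interpretations; only relative to an interpretation does the bare mapping count as computing anything in particular. The example is mine, not the paper's.

```python
# Machine "cognition" as a bare mapping of inputs to outputs, which
# becomes a computation only under a human interpretation. The same
# mapping supports more than one reading.

# The device: an uninterpreted mapping of input pairs to outputs.
device = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

def interpret_as_xor(inputs):
    # One human reading: the states are bits, so the device computes XOR.
    return f"{inputs[0]} XOR {inputs[1]} = {device[inputs]}"

def interpret_as_agreement(inputs):
    # A different reading of the same mapping: agreement of two votes.
    return "agree" if device[inputs] == 0 else "disagree"

print(device[(1, 0)])                  # bare behaviour: 1
print(interpret_as_xor((1, 0)))        # '1 XOR 0 = 1'
print(interpret_as_agreement((1, 0)))  # 'disagree'
```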