In this short paper I will introduce an idea which, I will argue, presents a fundamental additional challenge to the machine consciousness community. The idea takes the questions surrounding phenomenology, qualia and phenomenality one step further into the realm of intersubjectivity but with a twist, and the twist is this: that an agent’s intersubjective experience is deeply felt and necessarily co-affective; it is enkinaesthetic, and only through enkinaesthetic awareness can we establish the affective enfolding which enables first the perturbation, and then the balance and counter-balance, the attunement and co-ordination of whole-body interaction through reciprocal adaptation.
That the successful development of fully autonomous artificial moral agents (AMAs) is imminent is becoming the received view within artificial intelligence research and robotics. The discipline of Machine Ethics, whose mandate is to create such ethical robots, is consequently gaining momentum. Although it is often asked whether a given moral framework can be implemented into machines, it is never asked whether it should be. This paper articulates a pressing challenge for Machine Ethics: to identify an ethical framework that is both implementable into machines and whose tenets permit the creation of such AMAs in the first place. Without consistency between ethics and engineering, the resulting AMAs would not be genuine ethical robots, and hence the discipline of Machine Ethics would be a failure in this regard. Here this challenge is articulated through a critical analysis of the development of Kantian AMAs, as one of the leading contenders for being the ethic that can be implemented into machines. In the end, however, the development of Kantian artificial moral machines is found to be anti-Kantian. The upshot is that machine ethicists need to look elsewhere for an ethic to implement into their machines.
This paper is a summary and evaluation of work presented at the AAAI 2005 Fall Symposium on Machine Ethics that brought together participants from the fields of Computer Science and Philosophy to the end of clarifying the nature of this newly emerging field and discussing different approaches one could take towards realizing the ultimate goal of creating an ethical machine.
Gödel's Theorem is often used in arguments against machine intelligence, suggesting humans are not bound by the rules of any formal system. However, Gödelian arguments can be used to support AI, provided we extend our notion of computation to include devices incorporating random number generators. A complete description scheme can be given for integer functions, by which nonalgorithmic functions are shown to be partly random. Not being restricted to algorithms can be accounted for by the availability of an arbitrary random function. Humans, then, might not be rule-bound, but Gödelian arguments also suggest how the relevant sort of nonalgorithmicity may be trivially made available to machines.
The problem of valid induction could be stated as follows: are we justified in accepting a given hypothesis on the basis of observations that frequently confirm it? The present paper argues that this question is relevant for the understanding of Machine Learning, but insufficient. Recent research in inductive reasoning has prompted another, more fundamental question: there is not just one given rule to be tested, there are a large number of possible rules, and many of these are somehow confirmed by the data — how are we to restrict the space of inductive hypotheses and choose effectively some rules that will probably perform well on future examples? We analyze whether and how this problem is approached in standard accounts of induction and show the difficulties that are present. Finally, we suggest that the explanation-based learning approach and related methods of knowledge-intensive induction could be, if not a solution, at least a tool for solving some of these problems.
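The hypothesis-selection problem this abstract raises can be made concrete in a few lines. The sketch below is our illustration, not the paper's method: several rules are all confirmed by the same data, and a simplicity bias (one possible restriction of the hypothesis space) picks among them. The hypotheses and their complexity scores are invented for the example.

```python
# Observations: positive examples of an unknown integer concept.
observations = [2, 4, 8, 16]

# Candidate hypotheses, each paired with a rough "complexity" score.
hypotheses = [
    ("even numbers",    lambda n: n % 2 == 0,                 1),
    ("powers of two",   lambda n: n > 0 and n & (n - 1) == 0, 2),
    ("the list itself", lambda n: n in (2, 4, 8, 16),         4),
]

# All three hypotheses are consistent with the data ("somehow confirmed").
consistent = [(name, rule, cost) for name, rule, cost in hypotheses
              if all(rule(x) for x in observations)]

# Inductive bias: choose the consistent hypothesis of lowest complexity.
name, rule, _ = min(consistent, key=lambda h: h[2])
print(name)  # "even numbers" under this particular simplicity ordering
```

Note that the bias, not the data, does the deciding: all three rules fit the observations equally well, yet they predict differently on unseen cases such as 6 or 32.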
Animals, including humans, are usually judged on what they could become, rather than what they are. Many physical and cognitive abilities in the ‘animal kingdom’ are only acquired (to a given degree) when the subject reaches a certain stage of development, which can be accelerated or spoilt depending on the environment, training or education. The term ‘potential ability’ usually refers to how quick and how likely the process of attaining the ability is. In principle, things should be no different for the ‘machine kingdom’. While machines can be characterised by a set of cognitive abilities, and measuring them is already a big challenge, known as ‘universal psychometrics’, a more informative, and yet more challenging, goal would be to also determine the potential cognitive abilities of a machine. In this paper we investigate the notion of potential cognitive ability for machines, focussing especially on universality and intelligence. We consider several machine characterisations (non-interactive and interactive) and give definitions for each case, considering permanent and temporal potentials. From these definitions, we analyse the relation between some potential abilities, bring out the dependency on the environment distribution and suggest some ideas about how potential abilities can be measured. Finally, we also analyse the potential of environments at different levels and briefly discuss whether machines should be designed to be intelligent or potentially intelligent.
John Searle distinguished between weak and strong artificial intelligence (AI). This essay discusses a third alternative, mild AI, according to which a machine may be capable of possessing a species of mentality. Using James Fetzer's conception of minds as semiotic systems, the possibility of what might be called "mild AI" receives consideration. Fetzer argues against strong AI by contending that digital machines lack the ground relationship required of semiotic systems. In this essay, the implementational nature of semiotic processes posited by Charles S. Peirce's triadic sign relation is re-examined in terms of the underlying dispositional processes and the ontological levels they would span in an inanimate machine. This suggests that, if non-human mentality can be replicated rather than merely simulated in a digital machine, the direction to pursue appears to be that of mild AI.
Learning general concepts in imperfect environments is difficult since training instances often include noisy data, inconclusive data, incomplete data, unknown attributes, unknown attribute values and other barriers to effective learning. It is well known that people can learn effectively in imperfect environments, and can manage to process very large amounts of data. Imitating human learning behavior therefore provides a useful model for machine learning in real-world applications. This paper proposes a new, more effective way to represent imperfect training instances and rules, and based on the new representation, a Human-Like Learning (HULL) algorithm for incrementally learning concepts well in imperfect training environments. Several examples are given to make the algorithm clearer. Finally, experimental results are presented that show the proposed learning algorithm works well in imperfect learning environments.
I consider three aspects in which machine learning and philosophy of science can illuminate each other: methodology, inductive simplicity and theoretical terms. I examine the relations between the two subjects and conclude by claiming these relations to be very close.
Cybernetics promoted machine-supported investigations of adaptive sensorimotor behaviours observed in biological systems. This methodological approach receives renewed attention in contemporary robotics, cognitive ethology, and the cognitive neurosciences. Its distinctive features concern machine experiments, and their role in testing behavioural models and explanations flowing from them. Cybernetic explanations of behavioural events, regularities, and capacities rely on multiply realizable mechanism schemata, and strike a sensible balance between causal and unifying constraints. The multiple realizability of cybernetic mechanism schemata paves the way to principled comparisons between biological systems and machines. Various methodological issues involved in the transition from mechanism schemata to their machine instantiations are addressed here, by reference to a simple sensorimotor coordination task. These concern the proper treatment of ceteris paribus clauses in experimental settings, the significance of running experiments with correct but incomplete machine instantiations of mechanism schemata, and the advantage of operating with real machines (as opposed to simulated ones) immersed in real environments.
The resolution of ambiguities is one of the central problems for Machine Translation. In this paper we propose a knowledge-based approach to disambiguation which uses Description Logics (DL) as the representation formalism. We present the process of anaphora resolution implemented in the Machine Translation system FAST and show how the DL system BACK is used to support disambiguation. The disambiguation strategy uses factors representing syntactic, semantic, and conceptual constraints with different weights to choose the most adequate antecedent candidate. We show how these factors can be declaratively represented as defaults in BACK. Disambiguation is then achieved by determining the interpretation that yields a qualitatively minimal number of exceptions to the defaults, and can thus be formalized as exception minimization.
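The weighted-factor strategy can be illustrated with a toy scorer. The factor names, weights and candidates below are invented for illustration; in the paper the defaults live in BACK, not in Python, but the selection principle, minimising the weighted count of exceptions to defaults, is the same.

```python
# Weights for default constraints (hypothetical values).
weights = {"gender_agreement": 3, "number_agreement": 3,
           "semantic_type": 2, "recency": 1}

# Exceptions to defaults incurred by each candidate antecedent of "it"
# (invented example sentence and candidates).
candidates = {
    "the printer":  ["recency"],                         # distant, but agrees
    "the engineer": ["gender_agreement", "semantic_type"],
    "the reports":  ["number_agreement"],
}

def exception_cost(violated):
    """Total weight of the defaults a candidate interpretation violates."""
    return sum(weights[f] for f in violated)

# Exception minimization: pick the candidate violating the least weight.
best = min(candidates, key=lambda c: exception_cost(candidates[c]))
print(best)  # "the printer": a single low-weight exception
```

The same scheme extends to any number of factors; only the weight table and the per-candidate violation lists change.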
It is argued that Nozick's experience machine thought experiment does not pose a particular difficulty for mental state theories of well-being. While the example shows that we value many things beyond our mental states, this simply reflects the fact that we value more than our own well-being. Nor is a mental state theorist forced to make the dubious claim that we maintain these other values simply as a means to desirable mental states. Valuing more than our mental states is compatible with maintaining that the impact of such values upon our well-being lies in their impact upon our mental lives.
On the 27th of October, 1949, the Department of Philosophy at the University of Manchester organized a symposium "Mind and Machine", as Michael Polanyi noted in his Personal Knowledge (1974, p. 261). This event is known, especially among scholars of Alan Turing, but it is scarcely documented. Wolfe Mays (2000) reported on the debate, which he personally had attended, and paraphrased a mimeographed document that is preserved at the Manchester University archive. He forwarded a copy to Andrew Hodges and B. Jack Copeland, who then published it on their respective websites. The interpretation here is based on the copy preserved in the Regenstein Library of the University of Chicago, Special Collections, Polanyi Collection (abbreviated RPC, box 22, folder 19). The same collection holds the mimeographed statement that Polanyi prepared for this symposium: "Can the mind be represented by a machine?" This text has not been studied by Polanyi scholars.
We describe an emerging field, that of nonclassical computability and nonclassical computing machinery. According to the nonclassicist, the set of well-defined computations is not exhausted by the computations that can be carried out by a Turing machine. We provide an overview of the field and a philosophical defence of its foundations.
Brain-machine interfaces are a growing field of research and application. The increasing possibilities to connect the human brain to electronic devices and computer software can be put to use in medicine, the military, and entertainment. Concrete technologies include cochlear implants, Deep Brain Stimulation, neurofeedback and neuroprostheses. The expectations for the near and further future are high, though it is difficult to separate hope from hype. The focus in this paper is on the effects that these new technologies may have on our ‘symbolic order’: on the ways in which popular categories and concepts may change or be reinterpreted. First, the blurring distinction between man and machine and the idea of the cyborg are discussed. It is argued that the morally relevant difference is that between persons and non-persons, which does not necessarily coincide with the distinction between man and machine. The concept of the person remains useful. It may, however, become more difficult to assess the limits of the human body. Next, the distinction between body and mind is discussed. The mind is increasingly seen as a function of the brain, and thus understood in bodily and mechanical terms. This raises questions concerning concepts of free will and moral responsibility that may have far-reaching consequences in the field of law, where some have argued for a revision of our criminal justice system, from retributivist to consequentialist. Even without such an (unlikely and unwarranted) revision occurring, brain-machine interactions raise many interesting questions regarding the distribution and attribution of responsibility.
Can we test philosophical thought experiments, such as whether people would enter an experience machine or would leave one once they are inside? Dan Weijers argues that since 'rational' subjects (e.g. students taking surveys in college classes) are believable, we can do so. By contrast, I argue that because such subjects will probably have the wrong affect (i.e. emotional states) when they are tested, such tests are almost worthless. Moreover, understood as a general policy, such pretend testing would ruin the results of most psychological tests, such as those of helping behavior, attitudes to authority, moral transgressions, etc. However, I also argue that certain philosophical thought experiments do not require us to have as much (or any) affect to understand them, or to elicit intuitions, and so can be tested. Generally, experimental philosophy must adhere to this limit, on pain of offering vacuous results.
Instead of using low-level neurophysiology mimicking and exploratory programming methods commonly used in the machine consciousness field, the hierarchical Operational Architectonics (OA) framework of brain and mind functioning proposes an alternative conceptual-theoretical framework as a new direction in the area of model-driven machine (robot) consciousness engineering. The unified brain-mind theoretical OA model explicitly captures (though in an informal way) the basic essence of brain functional architecture, which indeed constitutes a theory of consciousness. The OA describes the neurophysiological basis of the phenomenal level of brain organization. In this context the problem of producing man-made “machine” consciousness and “artificial” thought is a matter of duplicating all levels of the operational architectonics hierarchy (with its inherent rules and mechanisms) found in the brain electromagnetic field. We hope that the conceptual-theoretical framework described in this paper will stimulate the interest of mathematicians and/or computer scientists to abstract and formalize principles of hierarchy of brain operations which are the building blocks for phenomenal consciousness and thought.
Measurement is said to be the basis of exact sciences as the process of assigning numbers to matter (things or their attributes), thus making it possible to apply the mathematically formulated laws of nature to the empirical world. Mathematics and empiria are best accorded to each other in laboratory experiments, which function as what Nancy Cartwright calls a nomological machine: an arrangement generating (mathematical) regularities. On the basis of accounts of measurement errors and uncertainties, I will argue for two claims: 1) Both the fundamental laws of physics, corresponding to an ideal nomological machine, and phenomenological laws, corresponding to a material nomological machine, lie, being highly idealised relative to empirical reality; moreover, laboratory measurement data do not describe properties inherent in the world independently of human understanding of it. 2) Therefore the naive, representational view of measurement and experimentation should be replaced with a more pragmatic or practice-based view.
Aaron Sloman remarks that many present disputes on consciousness are based, on the one hand, on re-inventing “ideas that have been previously discussed at length by others” and, on the other hand, on debating “unresolvable” issues, such as that about which animals have phenomenal consciousness. For what it’s worth, I offer a couple of examples, related to certain topics that Sloman deals with in his paper, which might be useful for introducing some comments in the remainder of this brief note.
Most philosophers appear to have ignored the distinction between the broad concept of Virtual Machine Functionalism (VMF) described in Sloman & Chrisley (2003) and the better known version of functionalism referred to there as Atomic State Functionalism (ASF), which is often given as an explanation of what Functionalism is, e.g. in Block (1995).

One of the main differences is that ASF encourages talk of supervenience of states and properties, whereas VMF requires supervenience of machines that are arbitrarily complex networks of causally interacting (virtual, but real) processes, possibly operating on different time-scales. Examples include the many different processes usually running concurrently on a modern computer, performing various tasks concerned with handling interfaces to physical devices, managing the file system, dealing with security, providing tools, entertainments, and games, and possibly processing research data. Another example of VMF would be the kind of functionalism involved in a large collection of possibly changing socio-economic structures and processes interacting in a complex community, and yet another is illustrated by the kind of virtual machinery involved in the many levels of visual processing of information about spatial structures, processes, and relationships (including percepts of moving shadows, reflections, highlights, optical-flow patterns and changing affordances) as you walk through a crowded car-park on a sunny day: generating a whole zoo of interacting qualia. (Forget solitary red patches, or experiences thereof.)

Perhaps VMF should be re-labelled "Virtual MachinERY Functionalism", because the word 'machinery' more readily suggests something complex with interacting parts.
VMF is concerned with virtual machines that are made up of interacting, concurrently active (but not necessarily synchronised) chunks of virtual machinery which not only interact with one another and with their physical substrates (which may be partly shared, and also frequently modified by garbage collection, metabolism, or whatever) but can also concurrently interact with and refer to various things in the immediate and remote environment (via sensory/motor channels, and possible future technologies also). That is, virtual machinery can include mechanisms that create and manipulate semantic content, not only syntactic structures or bit patterns as digital virtual machines do.
Earlier, we have studied computations possible by physical systems and by algorithms combined with physical systems. In particular, we have analysed the idea of using an experiment as an oracle to an abstract computational device, such as the Turing machine. The theory of composite machines of this kind can be used to understand (a) a Turing machine receiving extra computational power from a physical process, or (b) an experimenter modelled as a Turing machine performing a test of a known physical theory T. Our earlier work was based upon experiments in Newtonian mechanics. Here we extend the scope of the theory of experimental oracles beyond Newtonian mechanics to electrical theory. First, we specify an experiment that measures resistance using a Wheatstone bridge and start to classify the computational power of this experimental oracle using non-uniform complexity classes. Secondly, we show that modelling an experimenter and experimental procedure algorithmically imposes a limit on our ability to measure resistance by the Wheatstone bridge. The connection between the algorithm and physical test is mediated by a protocol controlling each query, especially the physical time taken by the experimenter. In our studies we find that physical experiments have an exponential-time protocol; we formulate this as a general conjecture. Our theory proposes that measurability in Physics is subject to laws which are collateral effects of the limits of computability and computational complexity.
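A toy model helps fix the idea of an experimental oracle. In this sketch (our construction, not the authors' formal treatment) each query asks the bridge whether the unknown resistance exceeds a trial setting, and bisection extracts roughly one bit per query. The paper's point about exponential-time protocols concerns the physical time each query takes as precision grows, which this sketch deliberately does not model.

```python
def bridge_exceeds(r_unknown, r_trial):
    """Oracle: the sign of the galvanometer reading tells us whether
    the unknown resistance exceeds the trial setting."""
    return r_unknown > r_trial

def measure_resistance(r_unknown, lo=0.0, hi=1000.0, bits=20):
    """Bisection over trial settings; each iteration is one physical query
    to the bridge, yielding one bit of the unknown value."""
    for _ in range(bits):
        mid = (lo + hi) / 2
        if bridge_exceeds(r_unknown, mid):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

estimate = measure_resistance(337.5)
print(round(estimate, 2))  # close to 337.5 after 20 binary queries
```

After n queries the estimate is within (hi - lo) / 2**(n+1) of the true value; the cost the paper analyses is the protocol time per query, not the query count.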
Brain machine interface (BMI) technology makes direct communication between the brain and a machine possible by means of electrodes. This paper reviews the existing and emerging technologies in this field and offers a systematic inquiry into the relevant ethical problems that are likely to emerge in the coming decades.
John Searle has argued that one can imagine embodying a machine running any computer program without understanding the symbols, and hence that purely computational processes do not yield understanding. The disagreement this argument has generated stems, I hold, from ambiguity in talk of 'understanding'. The concept is analysed as a relation between subjects and symbols having two components: a formal and an intentional. The central question, then, becomes whether a machine could possess the intentional component with or without the formal component. I argue that the intentional state of a symbol's being meaningful to a subject is a functionally definable relation between the symbol and certain past and present states of the subject, and that a machine could bear this relation to a symbol. I sketch a machine which could be said to possess, in primitive form, the intentional component of understanding. Even if the machine, in lacking consciousness, lacks full understanding, it contributes to a theory of understanding and constitutes a counterexample to the Chinese Room argument.
Herein we make a plea to machine ethicists for the inclusion of constraints on their theories consistent with empirical data on human moral cognition. As philosophers, we clearly lack widely accepted solutions to issues regarding the existence of free will, the nature of persons and firm conditions on moral agency/patienthood; all of which are indispensable concepts to be deployed by any machine able to make moral judgments. No agreement seems forthcoming on these matters, and we don’t hold out hope for machines that can both always do the right thing (on some general ethic) and produce explanations for their behavior that would be understandable to a human confederate. Our tentative solution involves understanding the folk concepts associated with our moral intuitions regarding these matters, and how they might be dependent upon the nature of human cognitive architecture. It is in this spirit that we begin to explore the complexities inherent in human moral judgment via computational theories of the human cognitive architecture, rather than under the extreme constraints imposed by rational-actor models assumed throughout much of the literature on philosophical ethics. After discussing the various advantages and challenges of taking this particular perspective on the development of artificial moral agents, we computationally explore a case study of human intuitions about the self and causal responsibility. We hypothesize that a significant portion of the variance in reported intuitions for this case might be explained by appeal to an interplay between the human ability to mindread and the way that knowledge is organized conceptually in the cognitive system. In the present paper, we build on a pre-existing computational model of mindreading (Bello et al. 2007) by adding constraints related to psychological distance (Trope and Liberman 2010), a well-established psychological theory of conceptual organization.
Our initial results suggest that studies of folk concepts involved in moral intuitions lead us to an enriched understanding of cognitive architecture and a more systematic method for interpreting the data generated by such studies.
This paper seeks to understand machine cognition. The nature of machine cognition has been shrouded in incomprehensibility. We have often encountered familiar arguments in cognitive science that human cognition is still faintly understood. This paper will argue that machine cognition is far less understood than even human cognition, despite the fact that a lot is known about computer architecture and computational operations. Even if there have been putative claims about the transparency of the notion of machine computations, these claims do not hold up in unraveling machine cognition, let alone machine consciousness (if there is any such thing). The nature and form of machine cognition remains further confused also because of attempts to explain human cognition in terms of computation and to model/simulate (aspects of) human cognitive processing in machines. Given that these problems in characterizing machine cognition persist, a view of machine cognition that aims to avoid these problems is outlined. The argument that is advanced is that something becomes a computation in machines only when a human interprets it, which is a kind of semiotic causation. From this it follows that a computing machine is not engaged in a computation unless a human interprets what it is doing; instead, it is engaged in machine cognition, which is defined as a member or subset of the set of all possible mappings of inputs to outputs. The human interpretation, which is a semiotic process, gives meaning to what a machine does, and then what it does becomes a computation.
This paper discusses how to refine a given initial legal ontology using an existing MRD (Machine-Readable Dictionary). There are two hard issues in the refinement process. One is to identify those MRD concepts most related to given legal concepts. The other is to correct bugs in a given legal ontology, using the concepts extracted from an MRD. In order to resolve these issues, we present a method to find the best MRD correspondences to given legal concepts, using two match algorithms. Moreover, another method, called static analysis, is given to refine a given legal ontology, based on the comparison between the initial legal ontology and the best MRD correspondences to given legal concepts. We have implemented a software environment to help a user refine a given legal ontology based on these methods. The empirical results have shown that the environment works well in the field of Contracts for the International Sale of Goods.
This paper describes a tool for assisting lawyers and paralegal teams during document review in eDiscovery. The tool combines a machine learning technology (CategoriX) with an advanced multi-touch interface, capable not only of addressing the usual cost, time and accuracy issues in document review, but also of facilitating the work of the review teams by capitalizing on the intelligence of the reviewers and enabling collaborative work.
This paper presents an analysis of three major contests for machine intelligence. We conclude that a new era for Turing’s test requires a fillip in the guise of a committed sponsor, not unlike DARPA, funders of the successful 2007 Urban Challenge.
The Geneva–Brussels approach to quantum mechanics (QM) and the semantic realism (SR) nonstandard interpretation of QM exhibit some common features and some deep conceptual differences. We discuss in this paper two elementary models provided in the two approaches as intuitive supports to general reasonings and as a proof of consistency of general assumptions, and show that Aerts’ quantum machine can be embodied into a macroscopic version of the microscopic SR model, overcoming the seeming incompatibility between the two models. This result provides some hints for the construction of a unified perspective in which the two approaches can be properly placed.
We present a novel procedure to engage the public in ethical deliberations on the potential impacts of brain machine interface technology. We call this procedure a convergence seminar, a form of scenario-based group discussion that is founded on the idea of hypothetical retrospection. The theoretical background of this procedure and the results of five seminars are presented.
In this paper we discuss the application of a new machine learning approach – Argument-Based Machine Learning – to the legal domain. An experiment using a dataset which has also been used in previous experiments with other learning techniques is described, and a comparison with previous experiments is made. We also tested this method for its robustness to noise in learning data. Argument-based machine learning is particularly suited to the legal domain as it makes use of the justifications of decisions which are available. Importantly, where a large number of decided cases are available, it provides a way of identifying which need to be considered. Using this technique, only decisions which will have an influence on the rules being learned are examined.
The article reports the results from the development of four data-driven discovery systems operating in linguistics. The first mimics the induction methods of John Stuart Mill, the second performs componential analysis of kinship vocabularies, the third is a general multi-class discrimination program, and the fourth finds logical patterns in data. These systems are briefly described and some arguments are offered in favour of machine linguistic discovery. The arguments refer to the strength of machines in computationally complex tasks, the guaranteed consistency of machine results, the portability of machine methods to new tasks and domains, and the potential machines provide for our gaining new insights.
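The first of these systems mimics Mill's induction methods. The core inductive step of Mill's method of agreement can be sketched directly; this is our reconstruction of the textbook rule, not the reported system, and the instances are invented.

```python
# Mill's method of agreement: a circumstance present in every instance
# where the phenomenon occurs is a candidate cause of that phenomenon.

instances = [
    {"circumstances": {"A", "B", "C"}, "phenomenon": True},
    {"circumstances": {"A", "D", "E"}, "phenomenon": True},
    {"circumstances": {"A", "B", "F"}, "phenomenon": True},
    {"circumstances": {"B", "C", "D"}, "phenomenon": False},
]

# Intersect the circumstance sets of all positive instances.
positive = [i["circumstances"] for i in instances if i["phenomenon"]]
candidates = set.intersection(*positive)
print(candidates)  # {'A'}: the only circumstance common to all positives
```

The negative instance is not used by the method of agreement itself; Mill's method of difference would bring it into play.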
We demonstrate a hybrid machine learning method to classify schizophrenia patients and healthy controls, using functional magnetic resonance imaging (fMRI) and single nucleotide polymorphism (SNP) data. The method consists of four stages: (1) SNPs with the most discriminating information between the healthy controls and schizophrenia patients are selected to construct a support vector machine ensemble (SNP-SVME). (2) Voxels in the fMRI map contributing to classification are selected to build another SVME (Voxel-SVME). (3) Components of fMRI activation obtained with independent component analysis (ICA) are used to construct a single SVM classifier (ICA-SVMC). (4) The above three models are combined into a single module using a majority voting approach to make a final decision (Combined SNP-fMRI). The method was evaluated by a fully validated leave-one-out method using 40 subjects (20 patients and 20 controls). The classification accuracy was: 0.74 for SNP-SVME, 0.82 for Voxel-SVME, 0.83 for ICA-SVMC, and 0.87 for Combined SNP-fMRI. Experimental results show that better classification accuracy was achieved by combining genetic and fMRI data than by using either alone, indicating that genetic and brain-function data represent different, but partially complementary, aspects of schizophrenia etiopathology. This study suggests an effective way to reassess biological classification of individuals with schizophrenia, which is also potentially useful for identifying diagnostically important markers for the disorder.
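Stage (4), the majority-voting combination, is the simplest to sketch. The per-subject predictions below are invented; the three entries in each tuple merely stand in for the outputs of SNP-SVME, Voxel-SVME and ICA-SVMC.

```python
def majority_vote(predictions):
    """Combine one binary prediction per model (0 = control, 1 = patient);
    the combined model outputs whichever label most models agree on."""
    return 1 if sum(predictions) > len(predictions) / 2 else 0

# Rows: one subject each; columns: SNP-SVME, Voxel-SVME, ICA-SVMC.
per_model = [(1, 1, 0),   # two of three say patient -> patient
             (0, 1, 0),   # majority says control
             (1, 1, 1)]   # unanimous patient

combined = [majority_vote(p) for p in per_model]
print(combined)  # [1, 0, 1]
```

With an odd number of models the vote can never tie, which is one practical reason for combining exactly three classifiers.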
Examples in the history of Automated Theorem Proving are given, in order to show that even a seemingly ‘mechanical’ activity, such as deductive inference drawing, involves special cultural features and tacit knowledge. Mechanisation of reasoning is thus regarded as a complex undertaking in ‘cultural pruning’ of human-oriented reasoning. Sociological counterparts of this passage from human- to machine-oriented reasoning are discussed, by focusing on problems of man-machine interaction in the area of computer-assisted proof processing.
Machine analogies play a prominent part in biology, especially in areas such as molecular cell biology and related parts of development, neuroscience and genetics. This paper provides an account of what makes a system machine-like, relying on a notion of causal order. It then looks at models and how they may represent a system as being more or less orderly. The (potentially changing) role of machine analogies is illustrated by a look at two examples from present-day biology: the study of macromolecules, and theoretical models of pattern formation.
The debate about experience-based or tacit knowledge has focused much attention on the limits to formalisation of work process knowledge. A main line of argument has been that, for example, industrial work even with highly advanced technical equipment can only be performed adequately when the worker, through experience on the job, has gained a feel for the functioning of the machinery and the properties and behaviour of the materials. In this debate links tend to be created between, on the one hand, formalised-abstracted-verbal knowledge and, on the other hand, informalised-concrete-tacit knowledge. We have worked for some years with the design of training materials which have at their core video documentation of best practice as we have found it at work. In this paper we present and discuss experience with the design and use of a hypermedia-type training material, SPRING, to be used by new machine setters in the spring industry. Based on our own experience we argue for the relevance of this type of training material as a means of supporting reflection and dialogue in the community of practitioners.
Turing wrote that the “guiding principle” of his investigation into the possibility of intelligent machinery was “The analogy [of machinery that might be made to show intelligent behavior] with the human brain.” In his discussion of the investigations that Turing said were guided by this analogy, however, he employs a more far-reaching analogy: he eventually expands the analogy from the human brain out to “the human community as a whole.” Along the way, he takes note of an obvious fact in the bigger scheme of things regarding human intelligence: grownups were once children; this leads him to imagine what a machine analogue of childhood might be. In this paper, I’ll discuss Turing’s child-machine, what he said about different ways of educating it, and what impact the “bringing up” of a child-machine has on its ability to behave in ways that might be taken for intelligent. I’ll also discuss how some of the various games he suggested humans might play with machines are related to this approach.
Intelligence is not a property unique to the human brain; rather it represents a spectrum of phenomena. An understanding of the evolution of intelligence makes it clear that the evolution of machine intelligence has no theoretical limits — unlike the evolution of the human brain. Machine intelligence will outpace human intelligence and very likely will do so during the lifetime of our children. The mix of advanced machine intelligence with human individual and communal intelligence will create an evolutionary discontinuity as profound as the origin of life. It will presage the end of the human species as we know it. The question, in the author's view, is not whether this will happen, but when, and what should be our response.
The Inventive Machine (IM) project is discussed. The project aims to develop a family of AI systems for intelligent support of all stages of engineering design. The peculiarities of the IM project include: a deep and comprehensive knowledge base, the theory of inventive problem solving (TIPS); the solving of complex problems at the level of inventions; applicability in any area of engineering; and structural prediction of engineering system development. The systems of the second generation are described in detail.
This paper focuses on how “Japanese technology” was formed in the Japanese machine tool industry, and presents how Japanese machine tool builders competed in R&D and the innovation process in the domestic and international markets. During the competition over the innovation of computerised numerically controlled (CNC) tools, drastic changes occurred in the ranking of individual firms. Prior to the transformation, the traditional “Big 5” companies occupied the largest market share. After the innovation, however, the “Big 3” firms, which had not been big in size at their origins, increased their market share. This paper explains how this change stemmed from different attitudes towards R&D and innovation.
In recent years the demand for regionally and culturally harmonised machine design has increasingly been on the agenda. Localising products such as machine tools immediately raises the question of new procedures that allow regional and cultural adaptations to be included in the design processes of machine tool companies. The crucial problem addressed here is how to transform the general insight into the necessity of culture- and region-adapted technologies into a design procedure comprising applicable design attributes. The paper shows in an exemplary way how ambiguous design attributes can eventually be embodied in a prototype design.
Amyotrophic lateral sclerosis (ALS) is a devastating disease with a lifetime risk of approximately 1 in 2000. Presently, diagnosis of ALS relies on clinical assessments for upper motor neuron and lower motor neuron deficits in multiple body segments, together with a history of progression of symptoms. In addition, it is common to evaluate lower motor neuron pathology in ALS by electromyography. However, upper motor neuron pathology is assessed solely on clinical grounds, hindering diagnosis. In the past decade magnetic resonance methods have been shown to be sensitive to the ALS disease process, namely: resting state connectivity measured with functional MRI, cortical thickness measured by high resolution imaging, diffusion tensor imaging (DTI) metrics such as fractional anisotropy (FA) and radial diffusivity (RD), and, more recently, magnetic resonance spectroscopy measures of gamma-aminobutyric acid (GABA) concentration. In the present work we utilize independent component analysis (ICA) to derive brain networks based on resting state functional magnetic resonance imaging and use those derived networks to build a disease state classifier using machine learning (support vector machine). We show that it is possible to achieve over 71% accuracy for disease state classification. These results are promising for the development of a clinically relevant disease state classifier. Future inclusion of other MR modalities such as high-resolution-derived cortical thickness, DTI metrics and MRS should improve this overall accuracy.
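The classification step described above can be sketched in outline: a linear support vector machine evaluated with leave-one-out cross-validation. The sketch below uses synthetic random stand-ins for the ICA-derived network features and labels; subject counts and feature dimensions are illustrative, not those of the study:

```python
# Illustrative sketch of disease-state classification with a linear SVM and
# leave-one-out cross-validation (synthetic data, not the study's features).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 20))       # 30 subjects x 20 network features (synthetic)
y = rng.integers(0, 2, size=30)     # 0 = control, 1 = ALS (synthetic labels)

# Each fold trains on 29 subjects and tests on the one held out
scores = cross_val_score(SVC(kernel="linear"), X, y, cv=LeaveOneOut())
accuracy = scores.mean()            # fraction of held-out subjects classified correctly
print(round(float(accuracy), 2))
```

With real features one would expect accuracy well above chance, as reported above; on random data it hovers near 0.5.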
Scholars widely assume that the term generation is preferable to reproduction in the context of early modern history, based on the premise that reproduction in the sense of procreation was not in use until the end of the eighteenth century. This shift in usage presumably corresponds to the rise of mechanistic philosophy; feminist scholarship, particularly that deriving from the hostile critique fashionable in the 1980s, has claimed that reproduction is associated with medical practitioners’ perceptions of women as baby-producing machines. However, this interpretation, whether in the interests of gender politics or reiterated in more sympathetic histories, misrepresents the historical record.
The truly philosophical issue in machine consciousness is whether machines can have 'hard consciousness' (as in Chalmers' hard problem of consciousness). The criteria for hard consciousness are higher than those for phenomenal consciousness, since the latter incorporates first-person functional consciousness.
In this paper we interpret a characterization of the Gödel speed-up phenomenon as providing support for the ‘Nagel-Newman thesis’ that human theorem recognizers differ from mechanical theorem recognizers in that the former do not seem to be limited by Gödel's incompleteness theorems whereas the latter do seem to be thus limited. However, we also maintain that (currently non-existent) programs which are open systems, in that they continuously interact with, and are thus inseparable from, their environment, are not covered by the above (or probably any other recursion-theoretic) argument.