Search results for 'STRONG AI'

1000+ found
  1. Stephan Zelewski (1991). Die Starke KI-These [The Strong AI-Thesis]. Journal for General Philosophy of Science 22 (2):337-348.
    The controversy about the strong AI-thesis was recently revived by two interrelated contributions stemming from J. R. Searle on the one hand and from P. M. and P. S. Churchland on the other hand. It is shown that the strong AI-thesis cannot be defended in the formulation used by the three authors. It violates some well-accepted criteria of scientific argumentation, especially the rejection of essentialistic definitions. Moreover, Searle's ‘proof’ is not conclusive. Though it may be reconstructed (...)
  2. Andrew Melnyk (1996). Searle's Abstract Argument Against Strong AI. Synthese 108 (3):391-419.
    Discussion of Searle's case against strong AI has usually focused upon his Chinese Room thought-experiment. In this paper, however, I expound and then try to refute what I call his abstract argument against strong AI, an argument which turns upon quite general considerations concerning programs, syntax, and semantics, and which seems not to depend on intuitions about the Chinese Room. I claim that this argument fails, since it assumes one particular account of what a program is. I suggest (...)
  3. Steffen Borge (2007). A Modal Defence of Strong AI. In Dermot Moran & Stephen Voss (eds.), Epistemology. The Proceedings of the Twenty-First World Congress of Philosophy. Vol. 6. The Philosophical Society of Turkey. 127-131.
    John Searle has argued that the aim of strong AI of creating a thinking computer is misguided. Searle’s Chinese Room Argument purports to show that syntax does not suffice for semantics and that computer programs as such must fail to have intrinsic intentionality. But we are not mainly interested in the program itself but rather the implementation of the program in some material. It does not follow by necessity from the fact that computer programs are defined syntactically that the (...)
  4. Jerome C. Wakefield (2003). The Chinese Room Argument Reconsidered: Essentialism, Indeterminacy, and Strong AI. [REVIEW] Minds and Machines 13 (2):285-319.
    I argue that John Searle's (1980) influential Chinese room argument (CRA) against computationalism and strong AI survives existing objections, including Block's (1998) internalized systems reply, Fodor's (1991b) deviant causal chain reply, and Hauser's (1997) unconscious content reply. However, a new "essentialist" reply I construct shows that the CRA as presented by Searle is an unsound argument that relies on a question-begging appeal to intuition. My diagnosis of the CRA relies on an interpretation of computationalism as a scientific theory about (...)
  5. Ronald L. Chrisley, Weak Strong AI: An Elaboration of the English Reply to the Chinese Room.
    Searle (1980) constructed the Chinese Room (CR) to argue against what he called "Strong AI": the claim that a computer can understand by virtue of running a program of the right sort. Margaret Boden (1990), in giving the English Reply to the Chinese Room argument, has pointed out that there is understanding in the Chinese Room: the understanding required to recognize the symbols, the understanding of English required to read the rulebook, etc. I elaborate on and defend this response to (...)
  6. Silin Ai (2011). Ai Silin Lun Wen Xuan [Selected Essays of Ai Silin]. Zhonghua Shu Ju [Zhonghua Book Company].
  7. Blake H. Dournaee (2010). Comments on “The Replication of the Hard Problem of Consciousness in AI and Bio-AI”. Minds and Machines 20 (2):303-309.
    In their joint paper entitled The Replication of the Hard Problem of Consciousness in AI and Bio-AI (Boltuc et al., Replication of the hard problem of consciousness in AI and Bio-AI: An early conceptual framework, 2008), Nicholas and Piotr Boltuc suggest that machines could be equipped with phenomenal consciousness, which is subjective consciousness that satisfies Chalmers's hard problem (we will abbreviate the hard problem of consciousness as H-consciousness). The claim is that if we knew the inner workings of (...)
  8. R. Michael Perry (2006). Consciousness as Computation: A Defense of Strong AI Based on Quantum-State Functionalism. In Charles Tandy (ed.), Death and Anti-Death, Volume 4: Twenty Years After De Beauvoir, Thirty Years After Heidegger. Palo Alto: Ria University Press.
  9. Nicholas Agar (2012). On the Irrationality of Mind-Uploading: A Reply to Neil Levy. [REVIEW] AI and Society 27 (4):431-436.
    In a paper in this journal, Neil Levy challenges Nicholas Agar’s argument for the irrationality of mind-uploading. Mind-uploading is a futuristic process that involves scanning brains and recording relevant information which is then transferred into a computer. Its advocates suppose that mind-uploading transfers both human minds and identities from biological brains into computers. According to Agar’s original argument, mind-uploading is prudentially irrational. Success relies on the soundness of the program of Strong AI—the view that it may someday be possible (...)
  10. Philip Cam (1990). Searle on Strong AI. Australasian Journal of Philosophy 68 (1):103-8.
  11. Aaron Sloman (1986). Did Searle Attack Strong Strong AI or Weak Strong AI? In Artificial Intelligence and its Applications. Chichester.
  12. Karl Pfeifer (1992). Searle, Strong AI, and Two Ways of Sorting Cucumbers. Journal of Philosophical Research 17:347-50.
    This paper defends Searle against the misconstrual of a key claim of “Minds, Brains, and Programs” and goes on to explain why an attempt to turn the tables by using the Chinese Room to argue for intentionality in computers fails.
  13. Roland Puccetti (1980). The Chess Room: Further Demythologizing of Strong AI. Behavioral and Brain Sciences 3 (3):441.
  14. Stephan Zelewski (1991). Die Starke KI-These. Journal for General Philosophy of Science 22 (2):337-348.
    The Strong AI-Thesis. The controversy about the strong AI-thesis was recently revived by two interrelated contributions stemming from J. R. Searle on the one hand and from P. M. and P. S. Churchland on the other hand. It is shown that the strong AI-thesis cannot be defended in the formulation used by the three authors. It violates some well-accepted criteria of scientific argumentation, especially the rejection of essentialistic definitions. Moreover, Searle's 'proof' is not conclusive. Though it (...)
  15. Gerd Gigerenzer (1990). Strong AI and the Problem of “Second-Order” Algorithms. Behavioral and Brain Sciences 13 (4):663-664.
  16. J. M. Bishop (2000). Redcar Rocks: Strong AI and Panpsychism. Consciousness and Cognition 9 (2):S35.
  17. M. Gams (1997). "Strong AI": An Adolescent Disorder. In Matjaz Gams (ed.), Mind Versus Computer: Were Dreyfus and Winograd Right? Amsterdam: IOS Press. 43--1.
  18. Georges Rey (2003). Searle's Misunderstandings of Functionalism and Strong AI. In John M. Preston & Michael A. Bishop (eds.), Views Into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press. 201--225.
  19. Burton Voorhees (1999). Gödel's Theorem and Strong AI: Is Reason Blind? In S. Smets, J. P. Van Bendegem & G. C. Cornelis (eds.), Metadebates on Science. VUB-Press and Kluwer. 6--43.
  20. Robert I. Damper (2006). The Logic of Searle's Chinese Room Argument. Minds and Machines 16 (2):163-183.
    John Searle’s Chinese room argument (CRA) is a celebrated thought experiment designed to refute the hypothesis, popular among artificial intelligence (AI) scientists and philosophers of mind, that “the appropriately programmed computer really is a mind”. Since its publication in 1980, the CRA has evoked an enormous amount of debate about its implications for machine intelligence, the functionalist philosophy of mind, theories of consciousness, etc. Although the general consensus among commentators is that the CRA is flawed, and notwithstanding the popularity (...)
  21. Ricardo Restrepo Echavarria (2009). Russell's Structuralism and the Supposed Death of Computational Cognitive Science. Minds and Machines 19 (2):181-197.
    John Searle believes that computational properties are purely formal and that consequently, computational properties are not intrinsic, empirically discoverable, nor causal; and therefore, that an entity’s having certain computational properties could not be sufficient for its having certain mental properties. To make his case, Searle employs an argument that had been used before him by Max Newman against Russell’s structuralism; one that Russell himself considered fatal to his own position. This paper formulates a not-so-explored version of Searle’s problem with computational (...)
  22. Bernard Molyneux (2012). How the Problem of Consciousness Could Emerge in Robots. Minds and Machines 22 (4):277-297.
    I show how a robot with what looks like a hard problem of consciousness might emerge from the earnest attempt to make a robot that is smart and self-reflective. This problem arises independently of any assumption to the effect that the robot is conscious, but deserves to be thought of as related to the human problem in virtue of the fact that (1) the problem is one the robot encounters when it tries to naturalistically reduce its own subjective states (2) (...)
  24. Slawomir J. Nasuto, John Mark Bishop, Etienne B. Roesch & Matthew C. Spencer (forthcoming). Zombie Mouse in a Chinese Room. Philosophy and Technology:1-15.
    John Searle’s Chinese Room Argument (CRA) purports to demonstrate that syntax is not sufficient for semantics, and, hence, because computation cannot yield understanding, the computational theory of mind, which equates the mind to an information processing system based on formal computations, fails. In this paper, we use the CRA, and the debate that emerged from it, to develop a philosophical critique of recent advances in robotics and neuroscience. We describe results from a body of work that contributes to blurring the (...)
  26. Raffaela Giovagnoli (2013). Representation, Analytic Pragmatism and AI. In Gordana Dodig-Crnkovic & Raffaela Giovagnoli (eds.), Computing Nature. 161-169.
    Our contribution aims to identify a valid philosophical strategy for a fruitful confrontation between human and artificial representation. The ground for this theoretical option resides in the necessity to find a solution that overcomes, on the one side, strong AI (i.e. Haugeland) and, on the other side, the view that rules out AI as explanation of human capacities (i.e. Dreyfus). We try to argue for Analytic Pragmatism (AP) as a valid strategy to present arguments for a form of weak (...)
  27. Barbara Warnick (2004). Rehabilitating AI: Argument Loci and the Case for Artificial Intelligence. [REVIEW] Argumentation 18 (2):149-170.
    This article examines argument structures and strategies in pro and con argumentation about the possibility of human-level artificial intelligence (AI) in the near term future. It examines renewed controversy about strong AI that originated in a prominent 1999 book and continued at major conferences and in periodicals, media commentary, and Web-based discussions through 2002. It will be argued that the book made use of implicit, anticipatory refutation to reverse prevailing value hierarchies related to AI. Drawing on Perelman and Olbrechts-Tyteca's (...)
  28. André Kukla (1994). Medium AI and Experimental Science. Philosophical Psychology 7 (4):493-501.
    It has been claimed that a great deal of AI research is an attempt to discover the empirical laws describing a new type of entity in the world—the artificial computing system. I call this enterprise 'medium AI', since it is in some respects stronger than Searle's 'weak AI', and in other respects weaker than 'strong AI'. Bruce Buchanan, among others, conceives of medium AI as an empirical science entirely on a par with psychology or chemistry. I argue that medium (...)
  29. Matjaz Gams (ed.) (1997). Mind Versus Computer: Were Dreyfus and Winograd Right? Amsterdam: IOS Press.
  30. Stuart Armstrong, Anders Sandberg & Nick Bostrom (2012). Thinking Inside the Box: Controlling and Using an Oracle AI. [REVIEW] Minds and Machines 22 (4):299-324.
    There is no strong reason to believe that human-level intelligence represents an upper limit of the capacity of artificial intelligence, should it be realized. This poses serious safety issues, since a superintelligent system would have great power to direct the future according to its possibly flawed motivation system. Solving this issue in general has proven to be considerably harder than expected. This paper looks at one particular approach, Oracle AI. An Oracle AI is an AI that does not act (...)
  31. Colin Beardon (1994). Computers, Postmodernism and the Culture of the Artificial. AI and Society 8 (1):1-16.
    The term ‘the artificial’ can only be given a precise meaning in the context of the evolution of computational technology and this in turn can only be fully understood within a cultural setting that includes an epistemological perspective. The argument is illustrated in two case studies from the history of computational machinery: the first calculating machines and the first programmable computers. In the early years of electronic computers, the dominant form of computing was data processing which was a reflection of (...)
  32. John R. Searle (1980). Minds, Brains and Programs. Behavioral and Brain Sciences 3 (3):417-57.
    What psychological and philosophical significance should we attach to recent efforts at computer simulations of human cognitive capacities? In answering this question, I find it useful to distinguish what I will call "strong" AI from "weak" or "cautious" AI (artificial intelligence). According to weak AI, the principal value of the computer in the study of the mind is that it gives us a very powerful tool. For example, it enables us to formulate and test hypotheses in a more (...)
  33. Daniel C. Dennett (1989). Murmurs in the Cathedral: Review of R. Penrose, The Emperor's New Mind. [REVIEW] Times Literary Supplement, September 29.
    The idea that a computer could be conscious--or equivalently, that human consciousness is the effect of some complex computation mechanically performed by our brains--strikes some scientists and philosophers as a beautiful idea. They find it initially surprising and unsettling, as all beautiful ideas are, but the inevitable culmination of the scientific advances that have gradually demystified and unified the material world. The ideologues of Artificial Intelligence (AI) have been its most articulate supporters. To others, this idea is deeply repellent: philistine, (...)
  34. Larry Hauser (1997). Searle's Chinese Box: Debunking the Chinese Room Argument. [REVIEW] Minds and Machines 7 (2):199-226.
    John Searle's Chinese room argument is perhaps the most influential and widely cited argument against artificial intelligence (AI). Understood as targeting AI proper – claims that computers can think or do think – Searle's argument, despite its rhetorical flash, is logically and scientifically a dud. Advertised as effective against AI proper, the argument, in its main outlines, is an ignoratio elenchi. It musters persuasive force fallaciously by indirection fostered by equivocal deployment of the phrase "strong AI" and reinforced by equivocation on the phrase "causal powers" (at least) (...)
  35. John McCarthy, John Searle's Chinese Room Argument.
    John Searle begins his (1990) "Consciousness, Explanatory Inversion and Cognitive Science" with: "Ten years ago in this journal I published an article (Searle, 1980a and 1980b) criticising what I call Strong AI, the view that for a system to have mental states it is sufficient for the system to implement the right sort of program with right inputs and outputs. Strong AI is rather easy to refute and the basic argument can be summarized in one sentence: (...)" The Chinese Room Argument can be refuted in one sentence.
  36. Larry Hauser, Searle's Chinese Room Argument. Field Guide to the Philosophy of Mind.
    John Searle's (1980a) thought experiment and associated (1984a) argument is one of the best known and most widely credited counters to claims of artificial intelligence (AI), i.e., to claims that computers do or at least can (roughly, someday will) think. According to Searle's original presentation, the argument is based on two truths: brains cause minds, and syntax doesn't suffice for semantics. Its target, Searle dubs "strong AI": "according to strong AI," according to Searle, "the computer is not (...)
  37. Larry Hauser, The Chinese Room Argument.
    The Chinese room argument - John Searle's (1980a) thought experiment and associated (1984) derivation - is one of the best known and most widely credited counters to claims of artificial intelligence (AI), i.e., to claims that computers do or at least can (someday might) think. According to Searle's original presentation, the argument is based on two truths: brains cause minds, and syntax doesn't suffice for semantics. Its target, Searle dubs "strong AI": "according to strong AI," according to (...)
  38. John R. Searle (2001). The Failures of Computationalism.
    Harnad and I agree that the Chinese Room Argument deals a knockout blow to Strong AI, but beyond that point we do not agree on much at all. So let's begin by pondering the implications of the Chinese Room. The Chinese Room shows that a system, me for example, could pass the Turing Test for understanding Chinese, for example, and could implement any program you like and still not understand a word of Chinese. Now, why? What does the genuine (...)
  39. John Mark Bishop (2009). Why Computers Can't Feel Pain. Minds and Machines 19 (4):507-516.
    The most cursory examination of the history of artificial intelligence highlights numerous egregious claims of its researchers, especially in relation to a populist form of ‘strong’ computationalism which holds that any suitably programmed computer instantiates genuine conscious mental states purely in virtue of carrying out a specific series of computations. The argument presented herein is a simple development of that originally presented in Putnam's monograph “Representation & Reality” (Bradford Books, Cambridge, 1988), which if correct, (...)
  40. Aaron Sloman, What is It Like to Be a Rock?
    This paper aims to replace deep-sounding, unanswerable, time-wasting pseudo-questions, which are often posed in the context of attacking some version of the strong AI thesis, with deep, discovery-driving, real questions about the nature and content of internal states of intelligent agents of various kinds. In particular the question (...)
  41. Setargew Kenaw (2008). Hubert L. Dreyfus's Critique of Classical AI and its Rationalist Assumptions. Minds and Machines 18 (2):227-238.
    This paper deals with the rationalist assumptions behind research in artificial intelligence (AI) on the basis of Hubert Dreyfus's critique. Dreyfus is a leading American philosopher known for his rigorous critique of the underlying assumptions of the field of artificial intelligence. Artificial intelligence specialists, especially those whose view is commonly dubbed “classical AI,” assume that creating a thinking machine like the human brain is not too distant a project because they believe that human intelligence works on the basis (...)
  42. David Joslin (2006). Real Realization: Dennett's Real Patterns Versus Putnam's Ubiquitous Automata. [REVIEW] Minds and Machines 16 (1):29-41.
    Both Putnam and Searle have argued that every abstract automaton is realized by every physical system, a claim that leads to a reductio argument against Cognitivism or Strong AI: if it is possible for a computer to be conscious by virtue of realizing some abstract automaton, then by Putnam’s theorem every physical system also realizes that automaton, and so every physical system is conscious—a conclusion few supporters of Strong AI would be willing to accept. Dennett has suggested (...)
  43. Larry Hauser, Chinese Room Argument. Internet Encyclopedia of Philosophy.
    The Chinese room argument is a thought experiment of John Searle (1980a) and associated (1984) derivation. It is one of the best known and widely credited counters to claims of artificial intelligence (AI)—that is, to claims that computers do or at least can (someday might) think. According to Searle’s original presentation, the argument is based on two key claims: brains cause minds and syntax doesn’t suffice for semantics. Its target is what Searle dubs “strong AI.” According to strong (...)
  44. William J. Rapaport (1986). Searle's Experiments with Thought. Philosophy of Science 53 (June):271-9.
    A critique of several recent objections to John Searle's Chinese-Room Argument against the possibility of "strong AI" is presented. The objections are found to miss the point, and a stronger argument against Searle is presented, based on a distinction between "syntactic" and "semantic" understanding.
  45. Reinaldo Bernal Velásquez (2012). E-Physicalism. A Physicalist Theory of Phenomenal Consciousness. Ontos Verlag.
    This work advances a theory in the metaphysics of phenomenal consciousness, which the author labels “e-physicalism”. Firstly, he endorses a realist stance towards consciousness and physicalist metaphysics. Secondly, he criticises Strong AI and functionalist views, and claims that consciousness has an internal character. Thirdly, he discusses HOT theories, the unity of consciousness, and holds that the “explanatory gap” is not ontological but epistemological. Fourthly, he argues that consciousness is not a supervenient but an emergent property, not reducible and endowed (...)
  46. Aaron Sloman (1992). The Emperor's Real Mind -- Review of Roger Penrose's The Emperor's New Mind: Concerning Computers, Minds and the Laws of Physics. Artificial Intelligence 56 (2-3):355-396.
    "The Emperor's New Mind" by Roger Penrose has received a great deal of both praise and criticism. This review discusses philosophical aspects of the book that form an attack on the "strong" AI thesis. Eight different versions of this thesis are distinguished, and sources of ambiguity diagnosed, including different requirements for relationships between program and behaviour. Excessively strong versions attacked by Penrose (and Searle) are not worth defending or attacking, whereas weaker versions remain problematic. Penrose (like Searle) regards (...)
  47. C. T. A. Schmidt (2005). Of Robots and Believing. Minds and Machines 15 (2):195-205.
    Discussion about the application of scientific knowledge in robotics in order to build people helpers is widespread. The issue herein addressed is philosophically poignant, that of robots that are “people”. It is currently popular to speak about robots and the image of Man. Behind this lurks the dialogical mind and the questions about the significance of an artificial version of it. Without intending to defend or refute the discourse in favour of ‘recreating’ Man, a lesser familiar question is brought forth: (...)
  48. Neal Jahren (1990). Can Semantics Be Syntactic? Synthese 82 (3):309-28.
    The author defends John R. Searle's Chinese Room argument against a particular objection made by William J. Rapaport called the Korean Room. Foundational issues such as the relationship of strong AI to human mentality and the adequacy of the Turing Test are discussed. Through undertaking a Gedankenexperiment similar to Searle's but which meets new specifications given by Rapaport for an AI system, the author argues that Rapaport's objection to Searle does not stand and that Rapaport's arguments seem convincing only (...)
  49. Mark Sprevak (2005). The Chinese Carnival. Studies in History and Philosophy of Science Part A 36 (1):203-209.
    In contrast to many areas of contemporary philosophy, something like a carnival atmosphere surrounds Searle’s Chinese room argument. Not many recent philosophical arguments have exerted such a pull on the popular imagination, or have produced such strong reactions. People from a wide range of fields have expressed their views on the argument. The argument has appeared in Scientific American, television shows, newspapers, and popular science books. Preston and Bishop’s recent volume of essays reflects this interdisciplinary atmosphere. The volume includes (...)
  50. Darren Whobrey (2001). Machine Mentality and the Nature of the Ground Relation. Minds and Machines 11 (3):307-346.
    John Searle distinguished between weak and strong artificial intelligence (AI). This essay discusses a third alternative, mild AI, according to which a machine may be capable of possessing a species of mentality. Using James Fetzer's conception of minds as semiotic systems, the possibility of what might be called "mild AI" receives consideration. Fetzer argues against strong AI by contending that digital machines lack the ground relationship required of semiotic systems. In this essay, the implementational nature of semiotic processes (...)
Results 1–50 of 1000+