In L'Homme machine, La Mettrie at one point discusses the possibility of teaching an ape to speak, and later he suggests that just as the inventor Vaucanson had made a mechanical flute player and a mechanical duck, it might be possible some day for 'another Prometheus' to make a mechanical man which could talk.
Aspects of an example of simulated shared subjectivity can be used both to support Steven Lehar's remarks on embodied percipients and to triangulate in a novel way the so-called "hard problem" of consciousness, which Lehar wishes to "sidestep" but which, given his other contentions regarding emergent holism, raises questions about whether he has been able or willing to do so.
It is asked to what extent answers to such questions as 'Can machines think?', 'Could robots have feelings?' might be expected to yield insight into traditional mind-body questions. It has sometimes been assumed that answering the first set of questions would be the same as answering the second. Against this approach, other philosophers have argued that answering the first set of questions would not help us to answer the second. It is argued that both of these assessments are mistaken. It is then claimed, although not argued in detail, that the following three approaches to the first set of questions are mistaken: (1) machines (and robots) obviously cannot think, feel, create, etc., since they do only what they are programmed to do; (2) on the basis of an analysis of the meaning of the words 'machine' ('robot', 'think', 'feel', etc.) we can see that in principle it would be impossible for machines (or robots) to think, feel, create, etc.; (3) machines (and robots) obviously can (or could) think, feel, etc., since they do certain things which, if we were to do them, would require thought, feeling, etc. It is argued that, once it is seen why approach (2) is mistaken, it becomes desirable to decline 'in principle' approaches to the first set of questions and to favor 'piecemeal investigations' where attention is centered upon what is actually taking place in machine technology, the development of new programming techniques, etc. Some suggestions are made concerning the relevance of current computer simulation studies to traditional mind-body questions. A new set of questions is proposed as a substitute for the first set of questions. It is hoped that attempts to answer these may provide us with new and detailed portraits of the mind-body relationship.
The notion that dreaming simulates threat recognition and avoidance faces difficulties deriving from (1) some typical characteristics of dream artifacts (some "surreal," some not) and (2) metaphysical issues involving the theory's need to represent a perspectival subject making use of the artifact. [Hobson et al.; Revonsuo].
Gunderson allows that internally propelled programmed devices (Hauser Robots) do act full-bloodedly under aspects, but he denies that this evidences that they really have the mental properties such acts seem to indicate. Rather, given our intuitive conviction that these machines lack consciousness, such performances evidence the dementalizability (contrary to both Searle and Hauser) of the full-blooded acts of detecting, calculating, etc., that such machines really do (contrary to Searle) perform.