The claim has often been made that passing the Turing Test would not be sufficient to prove that a computer program was intelligent, because a trivial program could do it, namely, the “Humongous-Table (HT) Program”, which simply looks up in a table what to say next. This claim is examined in detail. Three ground rules are argued for: (1) that the HT program must be exhaustive, and not based on some vaguely imagined set of tricks; (2) that the HT program must not be created by some set of sentient beings enacting responses to all possible inputs; (3) that in the current state of cognitive science it must be an open possibility that a computational model of the human mind will be developed that accounts for at least its nonphenomenological properties. Given ground rule 3, the HT program could simply be an “optimized” version of some computational model of a mind, created via the automatic application of program-transformation rules (thus satisfying ground rule 2). Therefore, whatever mental states one would be willing to impute to an ordinary computational model of the human psyche, one should be willing to grant to the optimized version as well. Hence no one could dismiss out of hand the possibility that the HT program was intelligent. This conclusion is important because the Humongous-Table Program Argument is the only argument ever marshalled against the sufficiency of the Turing Test, if we exclude arguments that cognitive science is simply not possible.
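The program transformation the abstract relies on can be pictured concretely. The following is a minimal sketch, not anything from the paper: the toy two-word vocabulary, the three-turn bound, and the stand-in `model_reply` function are all my own assumptions, chosen only to keep the table small. It shows how any deterministic computational model of a conversant could, in principle, be mechanically compiled into a pure lookup table.

```python
# Minimal sketch (not from the paper) of compiling a computational model
# into a Humongous-Table program. Vocabulary, turn bound, and the stand-in
# model are illustrative assumptions.

from itertools import product

ALPHABET = ["yes", "no"]   # toy input vocabulary (assumption)
MAX_TURNS = 3              # bound on conversation length (assumption)

def model_reply(history):
    """Stand-in for a computational model of a mind: any deterministic
    function from a conversation history to the next utterance."""
    return "agree" if history.count("yes") > history.count("no") else "demur"

def build_humongous_table():
    """Enumerate every possible history up to MAX_TURNS and record the
    model's reply: the 'automatic program transformation' of the abstract."""
    table = {}
    for n in range(1, MAX_TURNS + 1):
        for history in product(ALPHABET, repeat=n):
            table[history] = model_reply(list(history))
    return table

TABLE = build_humongous_table()

def ht_program(history):
    """The Humongous-Table Program: pure table lookup, nothing computed."""
    return TABLE[tuple(history)]

# The table-driven version agrees with the model on every input:
assert ht_program(["yes", "no", "yes"]) == model_reply(["yes", "no", "yes"])
```

With a realistic vocabulary and conversation length the table becomes astronomically large, which is exactly why the program is “humongous”; but since every call to `ht_program` is a single lookup, the optimized version computes nothing at run time, and by the abstract's argument it inherits whatever mental states the original model had.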
O'Brien & Opie's theory fails to address the relation between consciousness and introspection. They take it for granted that once something is experienced, it can be commented on. But introspection requires neural structures that, according to their theory, have nothing to do with experience as such. That makes the tight coupling between the two in humans a mystery.
People and intelligent computers, if there ever are any, will both have to believe certain things in order to be intelligent agents at all, or to be a particular sort of intelligent agent. I distinguish implicit beliefs that are inherent in the architecture of a natural or artificial agent, in the way it is 'wired', from explicit beliefs that are encoded in a way that makes them easier to learn, and to erase if proven mistaken. I introduce the term IFI, which stands for irresistible framework intuition, for an implicit belief that can come into conflict with an explicit one. IFIs are a key element of any theory of consciousness that explains qualia and other aspects of phenomenology as second-order beliefs about perception. Before I can survey the IFI landscape, I review evidence that the brains of humans, and presumably of other intelligent agents, consist of many specialized modules that are capable of sharing a unified workspace on urgent occasions and of jointly modeling themselves as a single agent. I also review previous work relevant to my subject. Then I explore several IFIs, starting with 'My future actions are free from the control of physical laws'. Most of them are universal, in the sense that they will be shared by any intelligent agent, though the case must be argued for each IFI. When made explicit, IFIs may look dubious or counterproductive, but they really are irresistible, so we find ourselves in the odd position of oscillating between justified beliefs and conflicting but irresistible ones. We cannot hope that some process of argumentation will resolve the conflict.
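The implicit/explicit distinction can be rendered as a software analogy. This is an illustrative sketch under my own assumptions, not McDermott's formalism: the explicit belief is a revisable data entry, while the implicit belief is baked into the agent's control flow and cannot be dropped without destroying its ability to deliberate.

```python
# Illustrative analogy only (not from the paper): implicit vs. explicit
# beliefs, and the conflict that makes the implicit one an IFI.

class Agent:
    def __init__(self):
        # Explicit belief: stored as revisable data, easy to learn or erase.
        self.explicit_beliefs = {"my_choices_are_physically_determined": True}

    def decide(self, options):
        # Implicit belief: the deliberation loop is *written as if* the
        # outcome were open, i.e. "my future actions are free". The agent
        # cannot erase this belief without ceasing to deliberate at all,
        # which is what makes it an irresistible framework intuition.
        return max(options, key=self.evaluate)

    def evaluate(self, option):
        return len(option)  # toy utility function (assumption)

agent = Agent()
print(agent.decide(["walk", "drive"]))  # deliberates as if the choice is open
print(agent.explicit_beliefs)           # while explicitly believing otherwise
```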
Assuming our understanding of the brain continues to advance, we will at some point have a computational theory of how access consciousness works. Block's supposed additional kind of consciousness will not appear in this theory, and continued belief in it will be difficult to sustain. Appeals to what it is like to experience such-and-such will carry little weight when we cannot locate a subject for whom it might be like something.
Stevan Harnad correctly perceives a deep problem in computationalism, the hypothesis that cognition is computation, namely, that the symbols manipulated by a computational entity do not automatically mean anything. Perhaps, he proposes, transducers and neural nets will not have this problem. His analysis goes wrong from the start, because computationalism is not as rigid a set of theories as he thinks. Transducers and neural nets are just two kinds of computational system, among many, and any solution to the semantic problem that works for them will work for most other computational systems.
Logic is useful as a neutral formalism for expressing the contents of mental representations. It can be used to extract crisp conclusions regarding the higher-order theory of phenomenal consciousness developed in (McDermott 2001, 2007). A key aspect of conscious perceptions is their connection to the distinction between appearance and reality. Perceptions must often be corrected. To do so requires that the logic of perception be able to represent the logical structure of judgment events, that is, to include the formulas of the logic as objects to be reasoned about. However, there is a limit to how finely humans can examine their own representations. Terms representing primary and secondary qualities seem to be "locked", so that the numbers (or levels of neural activation) that are their essence are not directly accessible. Humans feel a need to invoke "intrinsic", "nonrelational" properties of many secondary qualities (their qualia) to "explicate" how we compare and discriminate among them, although this is not actually how the comparisons are accomplished. This model of qualia explains several things: it accounts for the difference between "normal" and "introspective" access to a perceptual module in terms of quotation; it dissolves Jackson's knowledge argument by explaining what Mary learns as a fictional but undoubtable belief structure; and it makes spectrum inversion logically impossible by providing a degree of freedom between the physical structure of the brain and the representations it contains, one that redescribes putative cases of spectrum inversion as alternative but equivalent ways of mapping physical states to representational states.
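The quotation device the abstract describes can be sketched in a logic-like notation. The notation below (Quine corners for quoted formulas, the predicates judge and retract, the example terms) is my own illustration, not the formalism of the cited papers.

```latex
% Illustrative notation only, not the cited papers' formalism.
% A judgment event e1 is reified; the judged formula occurs inside
% Quine corners as a term, i.e., as an object to be reasoned about.
\[
  \mathit{judge}(e_1,\ \mathit{me},\ \ulcorner \mathit{red}(\mathit{patch}_{17}) \urcorner,\ t_1)
\]
% Correcting the perception means reasoning about e1 itself, which is
% what representing the appearance/reality distinction requires.
\[
  \mathit{retract}(e_1,\ t_2) \wedge
  \mathit{judge}(e_2,\ \mathit{me},\ \ulcorner \mathit{white}(\mathit{patch}_{17}) \urcorner,\ t_2)
\]
```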
Zombies are hypothetical creatures identical to us in behavior and internal functionality, but lacking experience. When the concept of a zombie is examined in careful detail, the attempt to keep experience out is found not to work, so the concept of a zombie is the same as the concept of a person. Because they are conceivable only in this trivial way, zombies are, in the relevant sense, inconceivable.