1. Cognitive Architectures
2. Multilayer Perceptrons
3. Relations between Variables
4. Structured Representations
5. Individuals
6. Where does the Machinery of Symbol Manipulation Come From?
7. Conclusions
Two leaders in the field offer a compelling analysis of the current state of the art and reveal the steps we must take to achieve a truly robust artificial intelligence. Despite the hype surrounding AI, creating an intelligence that rivals or exceeds human levels is far more complicated than we have been led to believe. Professors Gary Marcus and Ernest Davis have spent their careers at the forefront of AI research and have witnessed some of the greatest milestones in the field, but they argue that a computer beating a human in Jeopardy! does not signal that we are on the doorstep of fully autonomous cars or superintelligent machines. The achievements in the field thus far have occurred in closed systems with fixed sets of rules, and these approaches are too narrow to achieve genuine intelligence. The real world, in contrast, is wildly complex and open-ended. How can we bridge this gap? What will the consequences be when we do? Taking inspiration from the human mind, Marcus and Davis explain what we need to advance AI to the next level, and suggest that if we are wise along the way, we won't need to worry about a future of machine overlords. If we focus on endowing machines with common sense and deep understanding, rather than simply focusing on statistical analysis and gathering ever larger collections of data, we will be able to create an AI we can trust--in our homes, our cars, and our doctors' offices. Rebooting AI provides a lucid, clear-eyed assessment of the current science and offers an inspiring vision of how a new generation of AI can make our lives better.
The apparently very close similarity between Adam's learning of the past tense and the Plunkett and Marchman model is exaggerated by several misleading comparisons--including arbitrary, unexplained changes in how graphs were plotted. The model's development differs from Adam's in three important ways. First, children show a U-shaped sequence of development that does not depend on abrupt changes in input, whereas U-shaped development in the simulation occurs only after an abrupt change in training regimen. Second, children overregularize vowel-change verbs more than no-change verbs, whereas the simulation overregularizes vowel-change verbs less often than no-change verbs. Third, children, including Adam, overregularize more than they irregularize, whereas the simulation overregularized less than it irregularized. Interestingly, the RM model--widely criticized as being inadequate--does somewhat better, correctly overregularizing vowel-change verbs more often than no-change verbs, and overregularizing more often than it irregularizes. Although Plunkett and Marchman's (1993) state-of-the-art model incorporated hidden layers and back-propagation, used a more realistic phonological coding scheme, and explored a broader range of parameters than Rumelhart and McClelland's model, its results are farther from psychological reality. It is unknown whether any connectionist model can mimic a child's performance without resorting to unrealistic exogenous changes in the training or input, but it is clear that adding hidden layers and back-propagation does not ensure a solution.
Rogers & McClelland's (R&M's) précis represents an important effort to address key issues in concepts and categorization, but few of the simulations deliver what is promised. We argue that the models are seriously underconstrained, importantly incomplete, and psychologically implausible; more broadly, R&M dwell too heavily on the apparent successes without comparable concern for limitations already noted in the literature.
This chapter examines an apparent tension created by recent research on neurological development and genetics on the one hand and cognitive development on the other. It considers what it might mean for intrinsic signals to guide the initial establishment of functional architecture. It argues that an understanding of the mechanisms by which the body develops can inform our understanding of the mechanisms by which the brain develops. It cites the view of developmental neurobiologists Fukuchi-Shimogori and Grove that the patterning of the part of the brain responsible for our higher functions is coordinated by the same basic mechanisms and signaling protein families used to generate patterning in other embryonic organs. Thus, what's good enough for the body is good enough for the brain.
Criteria that aim to dichotomize cognition into rules and similarity are destined to fail because rules and similarity are not in genuine conflict. A given cognitive domain may exploit rules without similarity, similarity without rules, or both at the same time.
The mere fact that a particular aspect of mind could offer an adaptive advantage is not enough to show that that property was in fact shaped by that adaptive advantage. Although it is possible that the tendency towards positive illusion is an evolved misbelief, it is also possible that positive illusions are a by-product of a broader, flawed cognitive mechanism that was itself shaped by accidents of evolutionary inertia.
We find the theory of neural reuse to be highly plausible, and suggest that human individual differences provide an additional line of argument in its favor, focusing on the well-replicated finding of positive manifold, in which individual differences are highly correlated across domains. We also suggest that neural reuse may be an important contributor to the phenomenon of positive manifold itself.
Connectionist networks excel at extracting statistical regularities but have trouble extracting higher-order relationships. Clark & Thornton suggest that a solution to this problem might come from Elman, but I argue that the success of Elman's simple recurrent network is illusory, and show that it cannot in fact represent abstract relationships that generalize to novel instances, undermining Clark & Thornton's key arguments.