The last two decades have produced unprecedented successes in the fields of artificial intelligence and machine learning (ML), due almost entirely to advances in deep neural networks (DNNs). Deep hierarchical memory networks are not a novel concept in cognitive science and can be traced back more than half a century to Simon's early work on discrimination nets for simulating human expertise. The major difference between DNNs and the deep memory nets meant for explaining human cognition is that the latter are symbolic networks designed to model the dynamics of human memory and learning. Cognition-inspired symbolic deep networks (SDNs) address several known issues with DNNs, including (1) learning efficiency, where DNNs require far more training examples than a human would; (2) catastrophic interference, where what a DNN has learned is unlearned when a new problem is presented; and (3) explainability, where there is no straightforward way to explain what a DNN has learned. This paper explores whether SDNs can match the classification accuracy of DNNs across several popular ML datasets and discusses the strengths and weaknesses of each approach. Simulations reveal that (1) SDNs provide similar accuracy to DNNs in most cases, (2) SDNs are far more efficient than DNNs, (3) SDNs are as robust as DNNs to irrelevant/noisy attributes in the data, and (4) SDNs are far more robust to catastrophic interference than DNNs. We conclude that SDNs offer a promising path toward human-level accuracy and efficiency in category learning. More generally, ML frameworks stand to benefit from cognitively inspired approaches, borrowing features and functionality from models meant to simulate and explain human learning.
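The catastrophic interference mentioned above can be illustrated with a toy example that is not from the paper: a single linear unit trained by stochastic gradient descent on one task, then on a conflicting task, loses what it first learned because the same weight must serve both tasks.

```python
# Toy sketch (illustrative only, not the paper's simulations): a single
# linear unit y = w * x trained by SGD on task A (y = +x), then on
# task B (y = -x). Continued training on B overwrites the solution for A.
def train(w, pairs, lr=0.1, epochs=50):
    for _ in range(epochs):
        for x, y in pairs:
            w += lr * (y - w * x) * x  # gradient step on squared error
    return w

task_a = [(x, x) for x in (-1.0, -0.5, 0.5, 1.0)]   # y = +x
task_b = [(x, -x) for x in (-1.0, -0.5, 0.5, 1.0)]  # y = -x

w = train(0.0, task_a)
err_a_before = sum((y - w * x) ** 2 for x, y in task_a)
w = train(w, task_b)  # keep training the same weight on task B
err_a_after = sum((y - w * x) ** 2 for x, y in task_a)
print(err_a_before < 1e-6, err_a_after > 1.0)  # prints: True True
```

Symbolic networks that store task-specific structure separately, as SDNs do, avoid this overwriting because new learning does not have to repurpose the same shared parameters.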
Successfully explaining and replicating the complexity and generality of human and animal learning will require the integration of a variety of learning mechanisms. Here, we introduce a computational model that integrates associative learning (AL) and reinforcement learning (RL). We contrast the integrated model with standalone AL and RL models in three simulation studies. First, a synthetic grid-navigation task highlights performance advantages for the integrated model in an environment where the reward structure is both diverse and dynamic. The second and third simulations contrast the performance of the three models on behavioral experiments, demonstrating advantages for the integrated model in accounting for behavioral data.
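One minimal way to picture such an integration, purely as a hypothetical sketch and not the paper's actual equations, is to maintain a reward-driven RL value and a co-occurrence-driven AL strength for each action and blend them when scoring options:

```python
# Hypothetical sketch of AL/RL integration. The update rules, the blending
# weight w, and all names are illustrative assumptions, not the model itself.
def q_update(q, r, alpha=0.1):
    return q + alpha * (r - q)  # RL: move the value estimate toward reward r

def assoc_update(v, co_occurred, beta=0.2):
    # AL: strengthen the association whenever the cue and outcome co-occur
    return v + beta * (1.0 - v) if co_occurred else v

def score(q, v, w=0.5):
    return w * q + (1 - w) * v  # integrated action preference

q, v = 0.0, 0.0
for _ in range(10):          # ten rewarded, co-occurring trials
    q = q_update(q, r=1.0)
    v = assoc_update(v, co_occurred=True)
print(round(score(q, v), 3))
```

The point of the sketch is only that the blended score lets associative strength guide choice even before reward differences have been learned, which is where a standalone RL model is weakest.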
Reinforcement learning (RL) models of decision-making cannot account for human decisions made in the absence of prior reward or punishment. We propose a mechanism for choosing among available options based on goal-option association strengths, where the association strength between two objects reflects previously experienced proximity between them. The proposed mechanism, Goal-Proximity Decision-making (GPD), is implemented within the ACT-R cognitive framework. GPD is found to be more efficient than RL in three maze-navigation simulations, and its advantage over RL grows as task difficulty increases. We also present an experiment in which participants are asked to make choices in the absence of prior reward; GPD captures human performance in this experiment better than RL.
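The core GPD idea can be sketched in a few lines: rank options by their association strength with the current goal, where strengths grow whenever two objects are experienced in proximity. The update rule and names below are illustrative assumptions, not ACT-R's actual activation equations.

```python
# Hedged sketch of Goal-Proximity Decision-making (GPD). The strengthening
# rule and learning rate are assumptions for illustration only.
from collections import defaultdict

strength = defaultdict(float)  # association strength per object pair

def observe(a, b, rate=0.3):
    # strengthen the association whenever a and b co-occur in proximity
    key = frozenset((a, b))
    strength[key] += rate * (1.0 - strength[key])

def choose(goal, options):
    # no reward signal needed: rank options by association with the goal
    return max(options, key=lambda o: strength[frozenset((goal, o))])

# the agent has seen the key near the door more often than near the window
for _ in range(5):
    observe("door", "key")
observe("window", "key")

print(choose("key", ["door", "window"]))  # prints: door
```

Note that `choose` never consults a reward history, which is why a mechanism of this shape can make sensible choices on the very first rewarded-free trial, where RL has nothing to go on.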
Cognitive science has much to contribute to the general scientific body of knowledge, and it is also a field rich with opportunities to provide background research that artificial intelligence (AI) developers can leverage. In this introduction, we briefly explore the history of AI, focusing in particular on the relationship between AI and cognitive science, and we introduce this special issue, which promotes the method of inspiring AI development with the results of cognitive science research.
Deep Neural Networks (DNNs) are popular for classifying large, noisy, analogue datasets. However, DNNs suffer from several known issues, including limited explainability, low learning efficiency, catastrophic interference, and a need for high-end computational resources. Our simulations reveal that psychologically inspired symbolic deep networks (SDNs) achieve accuracy and robustness to noise similar to DNNs on common ML problem sets, while addressing these issues.