Recent progress in artificial intelligence has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats that of humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn and how they learn it. Specifically, we argue that these machines should build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; ground learning in intuitive theories of physics and psychology to support and enrich the knowledge that is learned; and harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes toward these goals that can combine the strengths of recent neural network advances with more structured cognitive models.
Constructing an intuitive theory from data confronts learners with a “chicken‐and‐egg” problem: The laws can only be expressed in terms of the theory's core concepts, but these concepts are only meaningful in terms of the role they play in the theory's laws; how can a learner discover appropriate concepts and laws simultaneously, knowing neither to begin with? We explore how children can solve this chicken‐and‐egg problem in the domain of magnetism, drawing on perspectives from computational modeling and behavioral experiments. We present 4‐ and 5‐year‐olds with two different simplified magnet‐learning tasks. In the first task, children appropriately constrain their beliefs to two hypotheses following ambiguous but informative evidence and, following a critical intervention, learn the correct theory. In the second, children infer the correct number of categories given no information about the possible causal laws. Children's hypotheses in these tasks are explained as rational inferences within a Bayesian computational framework.
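The Bayesian framing above can be illustrated with a toy sketch (not the paper's actual model): a learner enumerates candidate causal hypotheses about which objects are magnets and updates a posterior over them from observed interactions. The hypothesis names, the simplistic causal law, and the noise parameter are all assumptions for illustration.

```python
# Toy Bayesian hypothesis comparison for a simplified magnetism task.
# Hypotheses, causal law, and noise level are illustrative assumptions.

def likelihood(evidence, hypothesis, noise=0.05):
    """P(evidence | hypothesis): each observed interaction either matches
    the hypothesis's prediction or is attributed to noise."""
    p = 1.0
    for pair, attracted in evidence:
        predicted = hypothesis(pair)
        p *= (1 - noise) if predicted == attracted else noise
    return p

def make_hypothesis(magnets):
    """Simplistic causal law: a pair attracts iff it contains a magnet."""
    def h(pair):
        return any(obj in magnets for obj in pair)
    return h

hypotheses = {
    "A is the magnet": make_hypothesis({"A"}),
    "B is the magnet": make_hypothesis({"B"}),
    "both are magnets": make_hypothesis({"A", "B"}),
    "neither is a magnet": make_hypothesis(set()),
}

# Ambiguous but informative evidence: the pair (A, B) attracted.
evidence = [(("A", "B"), True)]

prior = {name: 1 / len(hypotheses) for name in hypotheses}
unnorm = {name: prior[name] * likelihood(evidence, h)
          for name, h in hypotheses.items()}
z = sum(unnorm.values())
posterior = {name: p / z for name, p in unnorm.items()}

for name, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {p:.3f}")
```

A single ambiguous observation does not identify the correct theory, but it sharply constrains the hypothesis space (here, "neither is a magnet" collapses toward zero), which is the qualitative pattern the abstract attributes to children.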
Humans routinely make inferences about both the contents and the workings of other minds based on observed actions. People consider what others want or know, but also how intelligent, rational, or attentive they might be. Here, we introduce a new methodology for quantitatively studying the mechanisms people use to attribute intelligence to others based on their behavior. We focus on two key judgments previously proposed in the literature: judgments based on observed outcomes (you're smart if you won the game) and judgments based on evaluating the quality of an agent's planning that led to their outcomes (you're smart if you made the right choice, even if you didn't succeed). We present a novel task, the maze search task (MST), in which participants rate the intelligence of agents searching a maze for a hidden goal. We model outcome‐based attributions based on the observed utility of the agent upon achieving a goal, with higher utilities indicating higher intelligence, and model planning‐based attributions by measuring the proximity of the observed actions to an ideal planner, such that agents who produce closer approximations of optimal plans are seen as more intelligent. We examine human attributions of intelligence in three experiments that use the MST and find that participants used both outcome and planning as indicators of intelligence. However, observing the outcome was not necessary, and participants still made planning‐based attributions of intelligence when the outcome was not observed. We also found that the weights individuals placed on plans and on outcomes correlated with an individual's ability to engage in cognitive reflection. Our results suggest that people attribute intelligence based on plans given sufficient context and cognitive resources, and rely on the outcome when computational resources or context are limited.
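The two attribution signals described above can be sketched in a toy form (not the paper's actual model): an outcome-based score as the utility the agent achieved, and a planning-based score as how closely the observed path approximates an ideal planner's shortest path. The reward, step cost, and mixing weight are illustrative assumptions.

```python
# Toy sketch of outcome-based vs. planning-based intelligence attribution.
# All numbers and the weighting scheme are illustrative assumptions.

def outcome_score(reward, cost_per_step, steps):
    """Outcome-based attribution: the utility the agent actually achieved."""
    return reward - cost_per_step * steps

def planning_score(observed_steps, optimal_steps):
    """Planning-based attribution: proximity of the observed path to an
    ideal planner's shortest path (1.0 = exactly optimal)."""
    return optimal_steps / observed_steps

def attributed_intelligence(reward, cost, observed_steps, optimal_steps, w=0.5):
    """Weighted mix of the two judgments; w plays the role of the
    individual-difference weight suggested by the experiments."""
    return (w * planning_score(observed_steps, optimal_steps)
            + (1 - w) * outcome_score(reward, cost, observed_steps))

# Two agents with identical outcomes: one whose goal was 10 steps away and
# who walked 10 steps (optimal), and one whose goal was 5 steps away but
# who meandered for 10. Outcome ties; the planning score separates them.
efficient = attributed_intelligence(reward=10, cost=0.5,
                                    observed_steps=10, optimal_steps=10)
meandering = attributed_intelligence(reward=10, cost=0.5,
                                     observed_steps=10, optimal_steps=5)
print(efficient, meandering)
```

The point of the toy is the dissociation: when outcomes are matched (or unobserved), only the planning term can distinguish the agents, mirroring the finding that participants still made planning-based attributions without seeing the outcome.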
We were encouraged by the broad enthusiasm for building machines that learn and think in more human-like ways. Many commentators saw our set of key ingredients as helpful, but there was disagreement regarding the origin and structure of those ingredients. Our response covers three main dimensions of this disagreement: nature versus nurture, coherent theories versus theory fragments, and symbolic versus sub-symbolic representations. These dimensions align with classic debates in artificial intelligence and cognitive science, although, rather than embracing these debates, we emphasize ways of moving beyond them. Several commentators saw our set of key ingredients as incomplete and offered a wide range of additions. We agree that these additional ingredients are important in the long run and discuss prospects for incorporating them. Finally, we consider some of the ethical questions raised regarding the research program as a whole.
Cushman's rationalization account can be extended to cover another part of his portrayal of representational exchange: thought experiments that lead to conclusions about the self. While Cushman's argument is compelling, a full account of rationalization as adaptive will need to explain why rationalizing one's own actions diverges from rationalizing the actions of others.