Working memory limits are best defined in terms of the complexity of the relations that can be processed in parallel. Complexity is defined as the number of related dimensions or sources of variation. A unary relation has one argument and one source of variation; its argument can be instantiated in only one way at a time. A binary relation has two arguments, two sources of variation, and two instantiations, and so on. Dimensionality is related to the number of chunks, because both attributes on dimensions and chunks are independent units of information of arbitrary size. Studies of working memory limits suggest that there is a soft limit corresponding to the parallel processing of one quaternary relation. More complex concepts are processed by segmentation or conceptual chunking. In segmentation, tasks are broken into components that do not exceed processing capacity and can be processed serially. In conceptual chunking, representations are collapsed to reduce their dimensionality and hence their processing load, but at the cost of making some relational information inaccessible. Neural net models of relational representations show that relations with more arguments have a higher computational cost, which coincides with experimental findings of higher processing loads in humans. Relational complexity is related to processing load in reasoning and sentence comprehension and can distinguish between the capacities of higher species. The complexity of relations processed by children increases with age. Implications for neural net models and for theories of cognition and cognitive development are discussed.
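The claim that relations with more arguments carry a higher computational cost can be illustrated with a tensor-product scheme of the kind used in some neural net models of relational representation. This is a minimal sketch, not the specific model discussed in the abstract; the vector dimension `d = 4` and the `tensor_units` helper are illustrative assumptions.

```python
# Illustrative sketch: in a tensor-product representation, an n-ary
# relation over d-dimensional argument vectors (plus a d-dimensional
# relation-symbol vector) needs a rank-(n + 1) tensor, so the number of
# representational units grows exponentially with arity: d ** (n + 1).
def tensor_units(arity, d):
    """Units needed for a relation of the given arity, assuming each
    argument and the relation symbol is a d-dimensional vector."""
    return d ** (arity + 1)

for arity, name in enumerate(["unary", "binary", "ternary", "quaternary"],
                             start=1):
    print(f"{name}: {tensor_units(arity, d=4)} units")
# unary: 16, binary: 64, ternary: 256, quaternary: 1024
```

The exponential growth in units from unary through quaternary relations parallels the soft processing limit described above: each added argument multiplies the cost of representing the relation in parallel.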
The core issue of our target article concerns how relational complexity should be assessed. We propose that assessments must be based on actual cognitive processes used in performing each step of a task. Complexity comparisons are important for the orderly interpretation of research findings. The links between relational complexity theory and several other formulations, as well as its implications for neural functioning, connectionist models, the roles of knowledge, and individual and developmental differences, are considered.
Minds are said to be systematic: the capacity to entertain certain thoughts confers the capacity to entertain other, related thoughts. Although systematicity is an important property of human cognition, its implications for cognitive architecture have been less than clear. In part, the uncertainty is due to a lack of precise accounts of the degree to which cognition is systematic. However, a recent study on learning transfer provides one clear example. That study is used here to compare transfer in humans and feedforward networks. Simulations and analysis show that while feedforward networks with shared weights are capable of exhibiting transfer, they cannot support the same degree of transfer as humans. One interpretation of these results is that common connectionist models lack the explicit internal representations that permit rapid learning.
In this survey study of 4735 US adults, respondents of all demographic and political affiliations agreed with prioritizing COVID-19 vaccine access for health care workers, adults of any age with serious comorbid conditions, frontline workers (eg, teachers and grocery workers), and Black, Hispanic, Native American, and other communities that have been disproportionately affected by COVID-19. Older adult respondents were less likely than younger respondents to list healthy people older than 65 years as 1 of their top 4 priority groups. These findings suggest that the US public agrees with the high-priority groups proposed by the National Academies of Science, Engineering, and Medicine but appears to disagree with approaches advanced by others that prioritize older adults but not essential workers or disproportionately affected communities.
Perruchet & Vinter claim that, with the additional capacity to determine whether two arbitrary stimuli are the same or different, their association-based PARSER model is sufficient to account for learning transfer. This claim overstates the generalization capacity of perceptual, as opposed to nonperceptual (symbolic), relational processes. An example shows why some types of learning transfer also require the capacity to bind arbitrary representations to nonperceptual relational symbols.
Analogy by priming learned transformations of (causally) related objects fails to explain an important class of inference involving abstract source-target relations. This class of analogical inference extends to ad hoc relationships, precluding the possibility of having learned them as object transformations. Rather, objects may be placed into momentarily corresponding, symbolic, source-target relationships just to complete an analogy.
We propose that the missing link from nonhuman to human cognition lies in our ability to form, modify, and re-form dynamic bindings between internal representations of world-states. This capacity goes beyond dynamic feature binding in perception and involves a new conception of working memory. We propose two tests for structured knowledge that might break the impasse in empirical research on nonhuman animal cognition.
Cowan's review shows that a short-term memory limit of four items is consistent with a wide range of phenomena in the field. However, he does not explain that limit, whereas an existing theory does offer an explanation for capacity limitations. Furthermore, processing capacity limits cannot be reduced to storage limits as Cowan claims.