An application of the Method of Analysis of Relational Complexity (MARC) to suppositional reasoning in the knight-knave task is outlined. The task requires testing suppositions derived from statements made by individuals who either always tell the truth or always lie. Relational complexity (RC) is defined as the number of unique entities that must be processed in parallel to arrive at a solution. A selection of five ternary and five quaternary items was presented to 53 psychology students in a pencil-and-paper format; a computer-administered version was presented to 50 students. As predicted, quaternary problems were associated with higher error rates and longer response times than ternary problems. The computer-administered form was more difficult than the pencil-and-paper version of the test. These differences are discussed in terms of RC theory and alternative processing accounts. Together, they indicate that the relational complexity metric is a useful and parsimonious way to quantify the complexity of reasoning tasks.
The confusion/non-consequential thinking explanation proposed by Newstead, Girotto, and Legrenzi (1995) for poor performance on Wason's THOG problem (a hypothetico-deductive reasoning task) was examined in three experiments with 300 participants. In general, as the cognitive complexity of the problem and the possibility of non-consequential thinking were reduced, correct performance increased. Significant but weak facilitation (33-40% correct) was found in Experiment 1 for THOG classification instructions that did not include the indeterminate response option. Substantial facilitation (up to 75% correct) was obtained in Experiment 2 with O'Brien et al.'s (1990) one-other-THOG classification instruction. In Experiment 3, a revised version of O'Brien et al.'s pre-test problem format also led to substantial facilitation, even with the use of the standard three-choice THOG classification instruction. These findings are discussed in terms of Newstead et al.'s theoretical proposal and possible attentional factors.
In this study both adolescents with autism spectrum disorder (ASD) and typically developing controls were presented with conditional reasoning problems using familiar content. In this task both valid and fallacious conditional inferences that would otherwise be drawn can be suppressed if counterexample cases are brought to mind. Such suppression occurs when additional premises are presented whose effect is to suggest such counterexample cases. We predicted and observed that this suppression effect was substantially and significantly weaker for autistic participants. We take this as evidence that autistic individuals are less contextualised in their reasoning, a finding that can be linked to research on autism across a variety of other cognitive tasks.
We discuss how modified dual-task approaches may be used to verify the degree to which cognitive tasks are capacity-demanding. We also delineate some of the complexities associated with the use of the “double easy-to-hard” paradigm for testing the claim of Halford, Wilson, and Phillips that hierarchical reasoning imposes processing demands equivalent to those of transitive reasoning.
Three experiments investigated the relationship between subjective experience and attentional lapses during sustained attention. These experiments employed two measures of subjective experience to examine how differences in awareness correspond to variations in both task performance and psycho-physiological measures. This series of experiments examined these phenomena during the Sustained Attention to Response Task. The results suggest that two components of subjective experience during sustained attention can be dissociated: task-unrelated thought, which corresponds to an absent-minded disengagement from the task, and a preoccupation with one's task performance, which seems best conceptualised as a strategic attempt to deploy attentional resources in response to a perception that environmental demands exceed one's ability to perform the task. The implications of these findings for our understanding of how awareness is maintained on task-relevant material during periods of sustained attention are discussed.
This article discusses how sequential sampling models can be integrated into a cognitive architecture. The new theory, Retrieval by Accumulating Evidence in an Architecture (RACE/A), combines the level of detail typically provided by sequential sampling models with the level of task complexity typically provided by cognitive architectures. We will use RACE/A to model data from two variants of a picture–word interference task in a psychological refractory period design. These models will demonstrate how RACE/A enables interactions between sequential sampling and long-term declarative learning, and between sequential sampling and task control. In a traditional sequential sampling model, the onset of the sampling process within the task is unclear, as is the number of sampling processes. RACE/A provides a theoretical basis for estimating the onset of sequential sampling processes during task execution and allows for easy modeling of multiple sequential sampling processes within a task.
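For readers unfamiliar with sequential sampling, a generic race model can be written in a few lines. This is a hedged sketch of the model family RACE/A builds on, not the RACE/A architecture itself; the function name and all parameter values are illustrative:

```python
import random

def race_model(drifts, threshold=10.0, noise=1.0, dt=0.1, seed=0):
    """Generic race model: independent accumulators gather noisy
    evidence; the first to reach threshold (ties broken by index)
    determines the response, and the elapsed accumulation time
    stands in for the response time."""
    rng = random.Random(seed)
    evidence = [0.0] * len(drifts)
    steps = 0
    while True:
        steps += 1
        for i, d in enumerate(drifts):
            evidence[i] += d * dt + rng.gauss(0.0, noise) * dt ** 0.5
        winners = [i for i, e in enumerate(evidence) if e >= threshold]
        if winners:
            return winners[0], steps * dt

choice, rt = race_model([1.5, 0.5])
print(choice, rt)
```

The sketch makes the abstract's point concrete: on its own, the model says nothing about when sampling begins within a task or how many such races run, which is exactly what RACE/A is claimed to supply.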
Observers have difficulty detecting visual changes. However, they are unaware of this inability, suggesting that people do not have an accurate understanding of their own visual processes. We explored whether this error is related to participants' beliefs about the roles of intention and scene complexity in detecting changes. In Experiment 1, participants had a higher failure rate for detecting changes in an incidental change detection task than in an intentional change detection task. This effect of intention was greatest for complex scenes. However, participants predicted equal levels of change detection for both types of tasks across scene complexity. In Experiment 2, emphasizing the differences between intentional and incidental tasks allowed participants to make predictions that were less inaccurate. In Experiment 3, using more sensitive measures and accounting for individual differences did not further improve predictions. These findings suggest that adults do not fully understand the role of intention and scene complexity in change detection.
The disavowal of positivist science by many educational researchers has resulted in a deepening polarization of research agendas and an epistemological divide that appears increasingly difficult to span. Although some researchers have turned away from science altogether, toward various forms of poststructuralist inquiry, this turn has not held back the renewed entrenchment, by policy elites, of narrower definitions of what constitutes scientific educational research. The new sciences of complexity signal the emergence of a new scientific paradigm that challenges some of the core assumptions of positivism, while offering the potential to develop a new kind of social science that demands both rigour and imagination in coming to understand the emergence and behaviours of social systems and the subsystems that comprise them. The language, concepts and principles of complexity are central to the development of a new science of qualities to complement the science of quantities that has shaped our understanding of the physical and social worlds. Accomplishing this task promises to 1) open up new investigations that have thus far been beyond the purview of scientific study, 2) allow the study of social phenomena as fully embodied, or at least via more robust models than those represented in the abstracted empiricism upon which the sciences of quantities are predicated, and 3) allow more coarse-grained explanations and predictions of social phenomena to be legitimated as scientific. Both educational research and educational practice stand to gain from this expansion of the scientific repertoire to include rigorous and imaginative investigations of phenomena characterized by change and transformation.
In recent years a new conceptual tool called Complexity Theory has come to the attention of scientists and philosophers. This approach is concerned with the emergent properties of interacting systems, and it has found wide applicability, from cosmology to social structure analysis. However, practitioners are still struggling to find the best way to define complexity and then to measure it. A new book, Complexity and the arrow of time, edited by Lineweaver et al., contains contributions from scholars who provide critical reviews of Complexity Theory and its wider applications. This is a huge task, and this essay examines how well the authors have succeeded in satisfying the claim made by the book's three editors to have clarified the leading questions. I also explore the application of Complexity Theory to biology as a means to explain the popular view that biological complexity has increased over time. In this regard, I conclude by recommending an Information Theory approach, which holds that physical complexity arises from the accumulation of genetic coding sequences.
Electoral control refers to attempts by an election's organizer to influence the outcome by adding, deleting, or partitioning voters or candidates. The important paper of Bartholdi, Tovey, and Trick that introduced control proposed computational complexity as a means of resisting control attempts: look for election systems where the chair's task in seeking control is itself computationally infeasible. We introduce and study a method of combining two or more candidate-anonymous election schemes in such a way that the combined scheme possesses all the resistances to control possessed by any of its constituents: it combines their strengths. From this and new resistance constructions, we prove for the first time that there exists a neutral, anonymous election scheme that is resistant to all twenty standard types of electoral control.
The core issue of our target article concerns how relational complexity should be assessed. We propose that assessments must be based on actual cognitive processes used in performing each step of a task. Complexity comparisons are important for the orderly interpretation of research findings. The links between relational complexity theory and several other formulations, as well as its implications for neural functioning, connectionist models, the roles of knowledge, and individual and developmental differences, are considered.
Measurements of the dimensionality of chaotic attractors obtained from behavioral data reflect task complexity, and could also be hypothesized to reflect the number of synchronized neural groups involved in generating the data. The changes in dimensionality across experimental conditions suggest that limited processing capacity, task complexity, demand, and synchrony in neural firing might be closely related.
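As a sketch of how such a dimensionality measurement can work in principle, the following minimal Grassberger–Procaccia correlation-dimension estimate computes the slope of log C(r) between two radii, where C(r) is the fraction of point pairs closer than r. This estimator is an assumption on our part (the abstract does not specify which method was used), and the radii and data are illustrative:

```python
import math
from itertools import combinations

def correlation_dimension(points, r1, r2):
    """Grassberger-Procaccia sketch: estimate attractor dimension as
    the slope of log C(r) vs. log r between two radii, where C(r) is
    the fraction of point pairs at distance less than r."""
    def corr_sum(r):
        pairs = list(combinations(points, 2))
        close = sum(1 for a, b in pairs if math.dist(a, b) < r)
        return close / len(pairs)
    c1, c2 = corr_sum(r1), corr_sum(r2)
    return (math.log(c2) - math.log(c1)) / (math.log(r2) - math.log(r1))

# Points spread uniformly along a line should yield a dimension near 1.
line = [(i / 200.0, 0.0) for i in range(200)]
print(round(correlation_dimension(line, 0.053, 0.211), 2))
```

A one-dimensional point cloud recovers a dimension near 1; applied to delay-embedded behavioral time series, higher estimates would correspond to the greater task complexity the abstract describes.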