Tested the 2-process theory of detection, search, and attention presented by the current authors in a series of experiments. The studies demonstrate the qualitative difference between 2 modes of information processing: automatic detection and controlled search; trace the course of the learning of automatic detection, of categories, and of automatic-attention responses; and show the dependence of automatic detection on attending responses and demonstrate how such responses interrupt controlled processing and interfere with the focusing of attention. The learning of categories is shown to improve controlled search performance. A general framework for human information processing is proposed. The framework emphasizes the roles of automatic and controlled processing. The theory is compared to and contrasted with extant models of search and attention.
Investigations of the function of consciousness in human information processing have focused mainly on two questions: (1) where does consciousness enter into the information-processing sequence, and (2) how does conscious processing differ from preconscious and unconscious processing. Input analysis is thought to be initially "preconscious," "pre-attentive," fast, involuntary, and automatic. This is followed by "conscious," "focal-attentive" analysis, which is relatively slow, voluntary, and flexible. It is thought that simple, familiar stimuli can be identified preconsciously, but that conscious processing is needed to identify complex, novel stimuli. Conscious processing has also been thought to be necessary for choice, learning and memory, and the organization of complex, novel responses, particularly those requiring planning, reflection, or creativity.
Utility maximization is a key element of a number of theoretical approaches to explaining human behavior, among them rational analysis, ideal observer theory, and signal detection theory. While some examples of these approaches define the utility maximization problem with little reference to the bounds imposed by the organism, others start from, and emphasize, the bounds imposed by the information-processing architecture, treating them as an explicit part of the utility maximization problem. These latter approaches are the topic of this issue of the journal.
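As a worked illustration of the kind of utility-maximization problem signal detection theory poses (the standard textbook decision rule, not anything specific to this issue), the ideal observer maximizes expected utility by responding "signal" whenever the likelihood ratio of the observation x exceeds a criterion beta fixed by the prior odds and the payoffs for correct rejections (CR), false alarms (FA), hits, and misses:

```latex
\[
  \text{respond ``signal'' iff}\quad
  \frac{p(x \mid S)}{p(x \mid N)} \;\ge\; \beta,
  \qquad
  \beta \;=\; \frac{P(N)}{P(S)}\cdot
              \frac{U_{\mathrm{CR}} - U_{\mathrm{FA}}}{U_{\mathrm{Hit}} - U_{\mathrm{Miss}}}.
\]
```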
This article examines infant attachment styles from the perspective of cognitive and emotional subjectivity. We review new data showing that individual differences in infants' attachment behaviors in the traditional Strange Situation are related to (a) infants' subjective construals of infant-caregiver interactions, (b) their attention to emotional expressions, and (c) polymorphisms in the oxytocin receptor (OXTR) gene. We use these findings to argue that individual differences in infants' attachment styles reflect, in part, the subjective outcomes of objective experience as filtered through genetic biases in socioemotional information processing.
The human information-acquisition process is one of the unifying mechanisms of the behavioral sciences. Three examples (from psychology, neuroscience, and political science) demonstrate that by inspecting this process, better understanding, and hence more powerful models, of human behavior can be built. The target method for this – process tracing – could serve as a central player in building such a unified framework.
A psycho-historical framework for the science of art appreciation would be an experimental discipline that may shed new light on the highest capacities of the human brain, yielding new scientific ways to talk about art appreciation. Recent findings on contextual information processing in the human brain make the concept of art-historical context amenable to empirical experimentation.
Floridi's Theory of Strongly Semantic Information posits the Veridicality Thesis. One motivation is that it can serve as a foundation for an information-based epistemology as an alternative to the tripartite theory of knowledge. However, the Veridicality Thesis is false if 'information' is to play an explanatory role in human cognition. Another motivation is avoiding the so-called Bar-Hillel/Carnap paradox. But this paradox only seems paradoxical if (a) 'information' and 'informativeness' are synonymous, (b) logic is a theory of inference, or (c) validity suffices for rational inference; and (a), (b), and (c) are all false.
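For context, a compact statement of the Bar-Hillel/Carnap paradox mentioned above, using Carnap and Bar-Hillel's standard measures (a summary of the classical theory, not Floridi's own formulation): semantic content and informativeness vary inversely with the logical probability m of a sentence, so a contradiction, whose logical probability is zero, comes out maximally informative.

```latex
\[
  \mathrm{cont}(s) = 1 - m(s),
  \qquad
  \mathrm{inf}(s) = \log_2 \frac{1}{m(s)};
  \qquad
  m(\bot) = 0 \;\Rightarrow\; \mathrm{cont}(\bot) = 1,\;\; \mathrm{inf}(\bot) = \infty.
\]
```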
Chimpanzee/human technological differences are vast, reflect multiple interacting behavioral processes, and may result from the increased information-processing and hierarchical mental-construction capacities of the human brain. Advanced social, technical, and communicative capacities therefore probably evolved in concert with increasing brain size. Interpretations of these evolutionary and species differences as continuities or discontinuities reflect differing scientific perspectives.
The productivity of (human) information processing as an economic activity is a question attracting growing interest. Using Marschak's evaluation framework, Radner and Stiglitz have shown that, under certain conditions, the production function of this activity has increasing marginal returns in its initial stage. This paper shows that, under slightly different conditions, this information-processing function has repeated convexities as processing activity continues. Even for smooth changes in the signals' likelihoods, the function is only piecewise smooth, with non-differentiable convexities at points where the conditional action changes. For linear likelihood functions the processing value proves to be piecewise linear, with convexities at these levels.
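A minimal numerical sketch of why such kinks arise, using a generic two-action, two-state decision problem of my own construction rather than the paper's model: the value of acting optimally on a posterior belief q is a maximum of functions linear in q, hence piecewise linear and convex, with a kink at the belief where the optimal action switches.

```python
import numpy as np

# Utilities: rows are actions, columns are states (s1, s2).
U = np.array([[1.0, -0.5],
              [-1.0, 2.0]])

q = np.linspace(0.0, 1.0, 11)                # posterior beliefs P(state = s1)
# Expected utility of each action at each belief, shape (11, 2).
expected = q[:, None] * U[:, 0] + (1 - q[:, None]) * U[:, 1]
value = expected.max(axis=1)                 # value of acting optimally at each belief
best = expected.argmax(axis=1)               # index of the optimal action at each belief

for qi, bi, vi in zip(q, best, value):
    print(f"q = {qi:.1f}  best action = {bi}  V(q) = {vi:+.2f}")
# The printed V(q) is linear on either side of the belief where `best` switches,
# with a convex kink at the switch point.
```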
Since the introduction of the computer in the early 1950s, the investigation of artificial intelligence has followed three chief avenues: the discovery of self-organizing systems; the building of working models of human behavior, incorporating specific psychological theories; and the building of "heuristic" machines, without bias in favor of humanoid characteristics. While this work has used philosophical logic and its results may illustrate philosophical problems, the artificial intelligence program is by now an intricate, organized specialty. This book, therefore, has a quite specialized audience of its own, although it can be very valuable to those philosophers who are interested and competent in using this pioneering material. Five scientific papers report attempts to solve five different kinds of problems. Bertram Raphael describes an attempt to build a memory structure that converts the information input into a systematic model by "understanding" the informational statements as they are made. Daniel Bobrow's machine can set up algebraic equations from informal verbal statements. M. Ross Quillian asks: "What sort of representational format can permit the 'meanings' of words to be stored?" Thomas Evans' machine, Analogy, serves as a model for "pattern-recognition" rather than the "common-property" method of semantic memory. Fischer Black has developed a logical deduction mechanism for question-answering which keeps track of where we are and avoids endless deduction. The editor and John McCarthy contribute more general chapters, providing the historical background of cybernetics and dealing with the problem of formalizing a concept of causality. Minsky ends the volume with his view that our convictions on dualism, consciousness, free will, and the like are used in the attempt to explain the complicated interactions between parts of our model of ourselves.--M. B. M.
In spite of the tremendous progress of biological science in recent decades, many aspects of the behaviour of organisms in general, and of humans in particular, remain somewhat obscure. A new approach towards the study of the behaviour of man was presented by Heisenberg when he emphasized that a Cartesian view of nature as an object out there is an illusion insofar as the observer is always part of the formula: the man viewing nature must be figured in, the experimenter into his experiment, and the artist into the scene he paints (Heisenberg, 1969). The present study is an attempt to take a step forward in this direction by focusing on the ways and means of involvement of the observer which make him an indelible part of the observation.
It is often thought that there is one key design principle, or at best a small set of design principles, underlying the success of biological organisms. Candidates include neural nets, 'swarm intelligence', evolutionary computation, dynamical systems, particular types of architecture, or the use of a powerful uniform learning mechanism, e.g. reinforcement learning. All of those support types of self-organising, self-modifying behaviours. But we are nowhere near understanding the full variety of powerful information-processing principles 'discovered' by evolution. By attending closely to the diversity of biological phenomena we may gain key insights into (a) how evolution happens, (b) what sorts of mechanisms, forms of representation, types of learning and development, and types of architectures have evolved, (c) how to explain ill-understood aspects of human and animal intelligence, and (d) new useful mechanisms for artificial systems.
The dorsal anterior cingulate cortex (dACC) is a key node in the human salience network. It has been ascribed motor, pain-processing, and affective functions. However, the dynamics of information flow in this complex region, and how it responds to inputs, remain unclear and are difficult to study using non-invasive electrophysiology. The area is targeted by neurosurgery to treat neuropathic pain. During deep brain stimulation surgery, we recorded local field potentials from this region in humans during a decision-making task requiring motor output. We investigated the spatial and temporal distribution of information flow within the dACC. We demonstrate the existence of a distributed network within the anterior cingulate cortex in which discrete nodes exhibit directed communication following inputs. We show that this network anticipates and responds to the valence of feedback to actions. We further show that these network dynamics adapt following learning. Our results provide evidence for the integration of learning and the response to feedback in a key cognitive region.
This paper presents a critique of cognitive psychology's micro-process program, as well as suggestions for a more scientifically and pragmatically viable approach to cognition. The paper proceeds in the following sequence. First, the mainstream point of view of contemporary cognitive psychology regarding cognitive micro-processes is summarized. Second, this view is criticized. Third and finally, cognitive science's neuropsychology program is discussed, not with respect to the considerable value of its findings, but with respect to the interpretation that would appropriately be placed on them. Throughout this discussion, an alternative position is advanced--namely, that cognitive processes are best viewed, on both scientific and pragmatic grounds, as private or mental versions of well-understood human social practices.
The functional neural mechanisms underlying the cognitive benefits of aerobic exercise have been a subject of ongoing research in recent years. However, while most neuroimaging studies to date that have examined functional neural correlates of aerobic exercise have used simple stimuli in highly controlled and artificial experimental conditions, our everyday life experiences require much more complex and dynamic neurocognitive processing. Therefore, we used a naturalistic complex information-processing fMRI paradigm of story comprehension to investigate the role of an aerobically active lifestyle in the processing of real-life, cognitively demanding situations. By employing the inter-subject correlation (ISC) approach, we identified differences in reliable stimulus-induced neural responses between groups of aerobically active and non-active cognitively intact older adults. Since cardiorespiratory fitness has previously been suggested to play a key role in the neuroprotective potential of aerobic exercise, we investigated its dose-response relationship with regional inter-subject neural responses. We found that an aerobically active lifestyle and cardiorespiratory fitness were associated with more synchronized inter-subject neural responses during story comprehension in higher-order cognitive and linguistic brain regions in the prefrontal and temporo-parietal cortices. In addition, while higher regional ISC values were associated with higher performance on a post-listening memory task, this did not translate into a significant between-group difference in task performance. We therefore suggest that the modulatory potential of aerobic exercise and cardiorespiratory fitness on cognitive processing may extend beyond simple and highly controlled stimuli to situations in which the brain faces continuous real-life complex information. Additional studies incorporating other aspects of real-life situations, such as naturalistic visual stimuli, everyday-life decision making, and motor responses in these situations, are desirable to further validate the observed relationship between aerobic exercise, cardiorespiratory fitness, and complex naturalistic information processing.
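A minimal sketch of the leave-one-out form of the inter-subject correlation approach assumed here (synthetic data, not the study's pipeline): each subject's regional time course is correlated with the average time course of the remaining subjects, so that only stimulus-locked, shared responses contribute to the measure.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_timepoints = 20, 300

# Synthetic regional time courses: a shared stimulus-driven signal plus
# subject-specific noise.
shared = rng.standard_normal(n_timepoints)
data = shared + 0.8 * rng.standard_normal((n_subjects, n_timepoints))

isc = []
for s in range(n_subjects):
    others = np.delete(data, s, axis=0).mean(axis=0)   # mean of the remaining subjects
    isc.append(np.corrcoef(data[s], others)[0, 1])     # Pearson r for subject s

print(f"mean leave-one-out ISC = {np.mean(isc):.2f}")
```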
Animals and robots perceiving and acting in a world require an ontology that accommodates entities, processes, states of affairs, etc., in their environment. If the perceived environment includes information-processing systems, the ontology should reflect that. Scientists studying such systems need an ontology that includes the first-order ontology characterising physical phenomena, the second-order ontology characterising perceivers of physical phenomena, and a third-order ontology characterising perceivers of perceivers, including introspectors. We argue that second- and third-order ontologies refer to contents of virtual machines and examine requirements for scientific investigation of combined virtual and physical machines, such as animals and robots. We show how the CogAff architecture schema, combining reactive, deliberative, and meta-management categories, provides a first draft schematic third-order ontology for describing a wide range of natural and artificial agents. Many previously proposed architectures use only a subset of CogAff, including subsumption architectures, contention-scheduling systems, architectures with 'executive functions' and a variety of types of 'Omega' architectures. Adding a multiply-connected, fast-acting 'alarm' mechanism within the CogAff framework accounts for several varieties of emotions. H-CogAff, a special case of CogAff, is postulated as a minimal architecture specification for a human-like system. We illustrate use of the CogAff schema in comparing H-CogAff with Clarion, a well-known architecture. One implication is that reliance on concepts tied to observation and experiment can harmfully restrict explanatory theorising, since what an information processor is doing cannot, in general, be determined by using the standard observational techniques of the physical sciences or laboratory experiments. Like theoretical physics, cognitive science needs to be highly speculative to make progress.
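A toy skeleton of the three CogAff categories named above, written as an illustration of the layering rather than as Sloman and Chrisley's implementation; all class and method names, and the placeholder behaviours, are my own.

```python
# Toy illustration of the CogAff reactive / deliberative / meta-management
# layering, plus a fast-acting alarm path that can interrupt the slower layers.
class CogAffAgent:
    def reactive(self, stimulus):
        """Fast, pattern-driven responses with no explicit deliberation."""
        return f"reflex response to {stimulus}"

    def deliberative(self, goal):
        """Slower 'what if' reasoning over explicitly represented options."""
        options = [f"plan-{i} for {goal}" for i in range(3)]
        return options[0]  # stand-in for evaluating and selecting a plan

    def meta_management(self, processing_trace):
        """Monitoring and redirecting the agent's own internal processing."""
        return "switch strategy" if "stuck" in processing_trace else "continue"

    def alarm(self, stimulus):
        """Multiply-connected, fast-acting interrupt cutting across all layers."""
        return stimulus == "threat"


agent = CogAffAgent()
print(agent.reactive("loud noise"))
print(agent.deliberative("reach food"))
print(agent.meta_management("stuck on plan-0"))
print(agent.alarm("threat"))
```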
Conventional methods of genetic engineering and more recent genome-editing techniques focus on identifying genetic target sequences for manipulation. This is a result of the historical concept of the gene, which was also the main assumption of the ENCODE project designed to identify all functional elements in the human genome sequence. However, the theoretical core concept has changed dramatically. The old concept of genetic sequences that can be assembled and manipulated like molecular bricks has problems explaining the natural genome-editing competences of viruses and RNA consortia, which are able to insert or delete, combine and recombine genetic sequences into cellular host organisms more precisely than at random, according to adaptational needs, or even to generate sequences de novo. Increasing knowledge about natural genome editing calls into question the traditional narrative of mutations (replication errors) as essential for generating genetic diversity and genetic content arrangements in biological systems. This may have far-reaching consequences for our understanding of artificial genome editing.
From his preliminary analysis in 1965, Hubert Dreyfus projected a future much different from the one with which his contemporaries were practically concerned, tempering their optimism about realizing something like human intelligence through conventional methods. At that time, he advised that there was nothing "directly" to be done toward machines with human-like intelligence, and that practical research should aim at a symbiosis between human beings and computers, with computers doing what they do best: processing discrete symbols in formally structured problem domains. Fast-forward five decades, and his emphasis on the difference between two essential modes of processing, the unconscious yet purposeful mode fundamental to situated human cognition and the "minded" sense of conscious processing characterizing symbolic reasoning that seems to lend itself to explicit programming, continues into the famous Dreyfus–McDowell debate. The present memorial reviews Dreyfus' early projections, asking whether the fears that punctuate current popular commentary on AI are warranted, and, in light of these, whether he would deliver similar practical advice to researchers today.
One of the major contemporary challenges to Thomistic moral psychology is that it is incompatible with the most up-to-date psychological science. Here Thomistic psychology is in good company, targeted along with most virtue-ethical views by philosophical situationism, which uses replicated psychological studies to suggest that our behaviors are best explained by situational pressures rather than by stable traits (like virtues and vices). In this essay we explain how this body of psychological research poses a much deeper threat to Thomistic moral psychology in particular. For Thomistic moral psychology includes descriptive claims about causal connections between certain cognitive processes and behaviors, even independent of whether those processes emerge from habits like virtues. Psychological studies of correlations between these can provide evidence against those causal claims. We offer a new programmatic response to this deeper challenge: empirical studies are relevant only if they investigate behaviors under intentional descriptions, such that the correlations discovered are between cognition and what Aquinas calls human acts. Psychological research on aggression already emphasizes correlations between cognition and intentional behavior, or human acts, and so is positioned to shed light on how well Thomistic moral psychology fits with empirical data. Surprisingly, Aquinas's views have quite a lot in common with a leading model of aggression, the social information processing (SIP) model. We close by suggesting how we might further examine claims of Thomistic moral psychology from an empirical perspective using research on social information processing and aggression.
This essay presents arguments for the claim that in the best of all possible worlds (Leibniz) there are sources of unpredictability and creativity for us humans, even given a pancomputational stance. A suggested answer to Chaitin's questions, "Where do new mathematical and biological ideas come from? How do they emerge?", is that they come from the world and emerge from basic physical (computational) laws. For humans, as a tiny subset of the universe, a part of the new ideas comes as the result of the re-configuration and reshaping of already existing elements, and another part comes from the outside world as a consequence of the openness and interactivity of biological and cognitive systems. For the universe at large, it is randomness that is the source of unpredictability on the fundamental level. In order to completely predict the Universe-computer we would need the Universe-computer itself to compute its next state. As Chaitin demonstrated, there are incompressible truths, which means truths that cannot be computed by any computer other than the universe itself.
DePrince relates problems such as dissociation, revictimization, and difficulties in social cognition. In particular, she states that individuals with dissociation, or who have, by their own testimony, been revictimized, show obvious difficulties in solving selection tasks framed as social contracts or precaution problems. In my view, these facts mean that, if we accept the theories holding that there are mechanisms in the human mind that regulate social exchanges and situations of risk, individuals with dissociation or who have been revictimized may have alterations in such mechanisms. However, these theories have been criticized and we have no conclusive evidence that they are valid. Therefore, in this paper, without admitting the existence of these mechanisms, I try to show that DePrince's outcomes can be interpreted from other perspectives.
A novel social interaction is a dynamic process in which participants adapt to, react to, and engage with their social partners. To facilitate such interactions, people gather information about the social context and structure of the situation. The current study aimed to deepen understanding of the psychological determinants of behavior in a novel social interaction. Three social robots and the participant interacted non-verbally according to a pre-programmed "relationship matrix" that dictated who favored whom. Participants' gaze was tracked during the interaction and, using Bayesian inference models, converted into a measure of participants' social information-gathering behavior. Our results reveal the dynamics in a novel environment, wherein information-gathering behavior is initially predicted by psychological inflexibility and then, toward the end of the interaction, by curiosity. These results highlight the utility of using social robots in behavioral experiments.
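A minimal sketch of the kind of Bayesian inference over a "relationship matrix" assumed here (a toy discrete update of my own, not the study's actual model): a hidden "who favors whom" hypothesis is updated from a sequence of gaze observations by Bayes' rule.

```python
import numpy as np

hypotheses = ["A favors B", "A favors C"]        # toy relationship hypotheses
prior = np.array([0.5, 0.5])

# Assumed observation model: looking at robot B is more likely when the
# hypothesis says B is favored.
likelihood_look_at_B = np.array([0.7, 0.3])

posterior = prior.copy()
for look_at_B in [True, True, False, True]:      # a toy gaze sequence
    like = likelihood_look_at_B if look_at_B else 1 - likelihood_look_at_B
    posterior = posterior * like                 # Bayes' rule: prior times likelihood
    posterior /= posterior.sum()                 # renormalize

for h, p in zip(hypotheses, posterior):
    print(f"P({h} | gaze) = {p:.2f}")
```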