Natural vision involves sequential eye movements that bring the fovea to locations selected by peripheral vision. How peripheral visual field loss (PVFL) affects this process is not well understood. We examine how the location and extent of PVFL affect eye movement behavior in a naturalistic visual search task. Ten patients with PVFL and thirteen normally sighted subjects with full visual fields (FVF) completed 30 visual searches monocularly. Subjects located a 4 x 4 degree target, pseudo-randomly selected within a 26 x 11 degree natural image. Eye positions were recorded at 50 Hz. Search duration, fixation duration, saccade size, and number of saccades per trial were not significantly different between PVFL and FVF groups (p > 0.1). A Chi-square test showed that the distributions of saccade directions for PVFL and FVF subjects were significantly different in 8 out of 10 cases (p < .01). Humphrey Visual Field pattern deviations for each subject were compared with the spatial distribution of eye movement directions. There were no significant correlations between saccade directional bias and visual field sensitivity across the 10 patients. Visual search performance was not significantly affected by peripheral visual field loss. An analysis of eye movement directions revealed that patients with PVFL show a biased directional distribution that is not directly related to the locus of vision loss, challenging feed-forward models of eye movement control. Consequently, many patients do not optimally compensate for visual field loss during visual search.
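The chi-square comparison of saccade-direction distributions reported above can be sketched as a test of homogeneity on a per-subject direction histogram. This is a minimal illustration, not the authors' pipeline: the eight 45-degree direction bins and all counts below are invented for the example.

```python
# Hedged sketch: does a PVFL patient draw saccade directions from the
# same distribution as an FVF control? All counts are hypothetical.
import numpy as np
from scipy.stats import chi2_contingency

# Invented counts of saccades per 45-degree direction bin
# (0, 45, 90, ... 315 degrees) for one patient and one control.
pvfl_counts = np.array([52, 30, 18, 25, 48, 27, 20, 30])
fvf_counts = np.array([31, 29, 28, 33, 30, 27, 31, 41])

# Chi-square test of homogeneity on the 2 x 8 contingency table.
chi2, p, dof, expected = chi2_contingency(np.vstack([pvfl_counts, fvf_counts]))
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```

With 8 direction bins and 2 groups the test has 7 degrees of freedom; a significant result indicates a directional bias in one group relative to the other, without identifying which bins drive it.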
Visual search is a ubiquitous task of great importance: it allows us to quickly find the objects that we are looking for. During active search for an object (target), eye movements are made to different parts of the scene. Fixation locations are chosen based on a combination of information about the target and the visual input. At the end of a successful search, the eyes typically fixate on the target. But does this imply that target identification occurs while looking at it? The duration of a typical fixation (~170 ms) and neuronal latencies of both the oculomotor system and the visual stream indicate that there might not be enough time to do so. Previous studies have suggested the following solution to this dilemma: the target is identified extrafoveally, and this event triggers a saccade towards the target location. However, this has not been experimentally verified. Here we test the hypothesis that subjects recognize the target before they look at it, using a search display of oriented colored bars. Using a gaze-contingent real-time technique, we prematurely stopped search shortly after subjects fixated the target. Afterwards, we asked subjects to identify the target location. We find that subjects can identify the target location even when fixating on the target for less than 10 ms. Longer fixations on the target do not increase detection performance but do increase confidence. In contrast, subjects cannot perform this task if they are not allowed to move their eyes. Thus, information about the target during conjunction search for colored oriented bars can, in some circumstances, be acquired at least one fixation ahead of reaching the target. The final fixation serves to increase confidence rather than performance, illustrating a distinct role for the final fixation in the subjective judgment of confidence rather than accuracy.
We investigated the effects of probability on visual search. Previous work has shown that people can utilize spatial and sequential probability information to improve target detection. We hypothesized that performance improvements from probability information would extend to the efficiency of visual search. Our task was a simple visual search in which the target was always present among a field of distractors, and could take one of two colors. The absolute probability of the target being either color was 0.5; however, the conditional probability – the likelihood of a particular color given a particular combination of two cues – varied from 0.1 to 0.9. We found that participants searched more efficiently for high conditional probability targets and less efficiently for low conditional probability targets, but only when they were explicitly informed of the probability relationship between cues and target color.
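The cue-to-color contingency described above can be sketched as a trial generator. This is a hedged reconstruction under assumptions: the cue labels, the four cue combinations, and the exact 0.9/0.1 pairing are illustrative; the only constraint taken from the abstract is that the marginal color probability stays at 0.5 while the conditional probability given the cues varies.

```python
# Hedged sketch of the trial-generation logic: target color is 50/50
# overall, but strongly predicted by the combination of two cues.
import random

# Hypothetical cue combinations; P(target is "red" | cues) is chosen
# so that the marginal probability of each color remains 0.5.
cond_p_red = {("A", "X"): 0.9, ("A", "Y"): 0.1,
              ("B", "X"): 0.1, ("B", "Y"): 0.9}

def make_trial(rng):
    cues = rng.choice(list(cond_p_red))
    color = "red" if rng.random() < cond_p_red[cues] else "green"
    return cues, color

rng = random.Random(0)
trials = [make_trial(rng) for _ in range(10000)]
p_red = sum(c == "red" for _, c in trials) / len(trials)
print(f"marginal P(red) ~ {p_red:.2f}")
```

Sampling many trials confirms the design property: the marginal color frequency hovers near 0.5 even though each cue combination is highly predictive.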
In the contextual cueing paradigm, incidental visual learning of repeated distractor configurations leads to faster search times in repeated compared to new displays. This contextual cueing is closely linked to the visual exploration of the search arrays, as indicated by fewer fixations and more efficient scan paths in repeated search arrays. Here, we examined contextual cueing under impaired visual exploration induced by a simulated central scotoma that causes the participant to rely on extrafoveal vision. We let normal-sighted participants search for the target either under unimpaired viewing conditions or with a gaze-contingent central scotoma masking the currently fixated area. Under unimpaired viewing conditions, participants revealed shorter search times and more efficient exploration of the display for repeated compared to novel search arrays and thus exhibited contextual cueing. When visual search was impaired by the central scotoma, search facilitation for repeated displays was eliminated. These results indicate that a loss of foveal sight, as commonly observed in maculopathies, may lead to deficits in high-level visual functions well beyond the immediate consequences of a scotoma.
A set of visual search experiments tested the proposal that focused attention is needed to detect change. Displays were arrays of rectangles, with the target being the item that continually changed its orientation or contrast polarity. Five aspects of performance were examined: linearity of response, processing time, capacity, selectivity, and memory trace. Detection of change was found to be a self-terminating process requiring a time that increased linearly with the number of items in the display. Capacity for orientation was found to be about 5 items, a value comparable to estimates of attentional capacity. Observers were able to filter out both static and dynamic variations in irrelevant properties. Analysis also indicated a memory for previously-attended locations. These results support the hypothesis that the process needed to detect change is much the same as the attentional process needed to detect complex static patterns. Interestingly, the features of orientation and polarity were found to be handled in somewhat different ways. Taken together, these results not only provide evidence that focused attention is needed to see change, but also show that change detection itself can provide new insights into the nature of attentional processing.
This paper questions two prima facie plausible claims concerning switching in the presence of ambiguous figures. The first is the claim that reversing is an instantaneous process. The second is the claim that the ability to reverse demonstrates the interpretive, inferential and constructive nature of visual processing. Empirical studies show that optical and cerebral events related to switching are protracted in time in a way that clashes with its perceived instantaneity. The studies further suggest an alternative theory of reversing: according to this alternative, seeing the same thing in multiple ways is a matter of uncovering what is already present to the senses through visual search.
We show that cast shadows can have a significant influence on the speed of visual search. In particular, we find that search based on the shape of a region is affected when the region is darker than the background and corresponds to a shadow formed by lighting from above. Results support the proposal that an early-level system rapidly identifies regions as shadows and then discounts them, making their shapes more difficult to access. Several constraints used by this system are mapped out, including constraints on the luminance and texture of the shadow region, and on the nature of the item casting the shadow. Among other things, this system is found to distinguish between line elements (items containing only edges) and surface elements (items containing visible surfaces), with only the latter deemed capable of casting a shadow.
We report on a new visual search task in which observers make highly accurate two-alternative forced-choice responses within 100-400 ms of display onset. This is a striking result, since accurate responding in a difficult search of this kind is usually possible only after at least 500 ms from display onset. The conditions under which such rapid responses are obtained involve brief initial glimpses of a search display interrupted by either a blank screen or a glimpse of a second display. On re-presentation of the original display, a significant proportion of responses are made within 100-500 ms. Since these responses are never made in the absence of display re-presentation, they are evidence of "rapid resumption" of the search task. We report experiments exploring the conditions critical for rapid resumption and consider its implications for memorial processes in visual search.
Previous theories of early vision have assumed that visual search is based on simple two-dimensional aspects of an image, such as the orientation of edges and lines. It is shown here that search can also be based on three-dimensional orientation of objects in the corresponding scene, provided that these objects are simple convex blocks. Direct comparison shows that image-based and scene-based orientation are similar in their ability to facilitate search. These findings support the hypothesis that scene-based properties are represented at preattentive levels in early vision.
The task of visual search is to determine as rapidly as possible whether a target item is present or absent in a display. Rapidly detected items are thought to contain features that correspond to primitive elements in the human visual system. In previous theories, it has been assumed that visual search is based on simple two-dimensional features in the image. However, visual search also has access to another level of representation, one that describes properties in the corresponding three-dimensional scene. Among these properties are three-dimensionality and the direction of lighting, but not viewing direction. These findings imply that the parallel processes of early vision are much more sophisticated than previously assumed.
It has generally been assumed that parallel visual search can only be based on the presence of simple features -- the spatial relations between features do not influence this process. We describe a series of visual search experiments that contradict this assumption. Search for line drawings of opaque polyhedra is greatly influenced by some line relations. In particular, search is rapid for line drawings (i) that have arrow- and Y-junctions corresponding to corners formed from orthogonal surfaces, and (ii) that do…
We describe an update to our visual search software for the Macintosh line of computers. The new software, VSearch Color, gives users access to the full-color capabilities of the Macintosh II line. One of the key features of the new software is its ability to treat graphics information separately from color information. This makes it easy to study color independently of form, to design experiments based on isoluminant stimuli, and to incorporate texture segregation, visual identification, number discrimination, adaptation, masking, and spatial cuing into the basic visual search paradigm.
Brain rhythms are more than just passive phenomena in visual cortex. For the first time, we show that the physiology underlying brain rhythms actively suppresses and releases cortical areas on a second-to-second basis during visual processing. Furthermore, their influence is specific at the scale of individual gyri. We quantified the interaction between broadband spectral change and brain rhythms on a second-to-second basis in electrocorticographic (ECoG) measurement of brain surface potentials in five human subjects during a visual search task. Comparison of visual search epochs with a blank screen baseline revealed changes in the raw potential, the amplitude of rhythmic activity, and in the decoupled broadband spectral amplitude. We present new methods to characterize the intensity and preferred phase of coupling between broadband power and band-limited rhythms, and to estimate the magnitude of rhythm-to-broadband modulation on a trial-by-trial basis. These tools revealed numerous coupling motifs between the phase of low frequency (δ, θ, α, β, and γ band) rhythms and the amplitude of broadband spectral change. In the θ and β ranges, the coupling of phase to broadband change is dynamic during visual processing, decreasing in some occipital areas and increasing in others, in a gyrally specific pattern. Finally, we demonstrate that the rhythms interact with one another across frequency ranges, and across cortical sites.
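One standard way to characterize the intensity and preferred phase of coupling between a low-frequency rhythm and broadband amplitude is a Canolty-style modulation index: the magnitude of the time-averaged product of the high-frequency amplitude envelope and the unit phasor of the low-frequency phase. The sketch below illustrates this on synthetic data; the toy signal, filter bands, and filter orders are assumptions for the example, not the authors' actual methods.

```python
# Hedged sketch of phase-amplitude coupling: broadband (high-gamma)
# amplitude is tied to theta phase in a synthetic signal, and the
# modulation index recovers that coupling.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000                                  # sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
theta = np.sin(2 * np.pi * 6 * t)          # 6 Hz rhythm
rng = np.random.default_rng(0)
broadband = (1 + theta) * rng.standard_normal(t.size)  # amplitude follows theta
sig = theta + 0.5 * broadband

def bandpass(x, lo, hi):
    b, a = butter(3, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

phase = np.angle(hilbert(bandpass(sig, 4, 8)))     # theta phase
amp = np.abs(hilbert(bandpass(sig, 80, 150)))      # high-gamma amplitude
mi = np.abs(np.mean(amp * np.exp(1j * phase)))     # modulation index
print(f"modulation index = {mi:.4f}")
```

In practice the raw index is compared against a surrogate distribution (e.g., from permuted amplitude series) to judge significance, since its scale depends on the mean amplitude.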
The ability to rapidly detect facial expressions of anger and threat over other salient expressions has adaptive value across the lifespan. Although studies have demonstrated this threat superiority effect in adults, surprisingly little research has examined the development of this process over the childhood period. In this study, we examined the efficiency of children's facial processing in visual search tasks. In Experiment 1, children (N=49) aged 8 to 11 years were faster and more accurate in detecting angry target faces embedded in neutral backgrounds than vice versa, and they were slower in detecting the absence of a discrepant face among angry than among neutral faces. This search pattern was unaffected by an increase in matrix size. Faster detection of angry than neutral deviants may reflect that angry faces stand out more among neutral faces than vice versa, or that detection of neutral faces is slowed by the presence of surrounding angry distracters. When keeping the background constant in Experiment 2, children (N=35) aged 8 to 11 years were faster and more accurate in detecting angry than sad or happy target faces among neutral background faces. Moreover, children with higher levels of anxiety were quicker to find both angry and sad faces whereas low anxious children showed an advantage for angry faces only. Results suggest a threat superiority effect in processing facial expressions in young children as in adults and that increased sensitivity for negative faces may be characteristic of children with anxiety problems.
Using visual search, functional magnetic resonance imaging (fMRI) and patient studies have demonstrated that medial temporal lobe (MTL) structures differentiate repeated from novel displays – even when observers are unaware of display repetitions. This suggests a role for MTL in both explicit and, importantly, implicit learning of repeated sensory information (Greene et al., 2007). However, recent behavioral studies, examining visual search and recognition performance concurrently, suggest that observers have explicit knowledge of at least some of the repeated displays (Geyer et al., 2010). The aim of the present fMRI study was thus to contribute new evidence regarding the contribution of MTL structures to explicit versus implicit learning in visual search. It was found that MTL activation was increased for explicit and, respectively, decreased for implicit relative to baseline displays. These activation differences were most pronounced in left anterior parahippocampal cortex, especially when observers were highly trained on the repeated displays. The data are taken to suggest that explicit and implicit memory processes are linked within MTL structures, but expressed via functionally separable mechanisms (repetition enhancement vs. repetition suppression). They further show that repetition effects in visual search need to be investigated at the display level.
In three experiments, participants' visual span was measured in a comparative visual search task in which they had to detect a local match or mismatch between two displays presented side by side. Experiment 1 manipulated the difficulty of the comparative visual search task by contrasting a mismatch detection task with a substantially more difficult match detection task. In Experiment 2, participants were tested in a single-task condition involving only the visual task and a dual-task condition in which they concurrently performed an auditory task. Finally, in Experiment 3, participants performed two dual-task conditions, which differed in the difficulty of the concurrent auditory task. Both the comparative search task difficulty (Experiment 1) and the divided attention manipulation (Experiments 2 and 3) produced strong effects on visual span size.
In this paper, I explore and defend the idea that we have epistemic responsibilities with respect to our visual searches, responsibilities that are far more fine-grained and interesting than the trivial responsibilities to keep our eyes open and “look hard”. In order to have such responsibilities, we must be able to exert fine-grained and interesting forms of control over our visual searches. I present both an intuitive case and an empirical case for thinking that we do, in fact, have such forms of control over our visual searches. I then show how these forms of control can be used to aim the visual beliefs that result from our searches towards various epistemic goals.
Eye movements were monitored while subjects performed parallel and serial search tasks. In Experiment 1a, subjects searched for an "O" among "X"s (parallel condition) and for a "T" among "L"s (serial condition). In the parallel condition of Experiment 1b, "Q" was the target and "O"s were distractors; in the serial condition, …
It has been consistently demonstrated that fear-relevant images capture attention preferentially over fear-irrelevant images. Current theory suggests that this faster processing could be mediated by an evolved module that allows certain stimulus features to attract attention automatically, prior to the detailed processing of the image. The present research investigated whether simplified images of fear-relevant stimuli would produce interference with target detection in a visual search task. In Experiment 1, silhouettes and degraded silhouettes of fear-relevant animals produced more interference than did the fear-irrelevant images. Experiment 2 compared the effects of fear-relevant and fear-irrelevant distracters and confirmed that the interference produced by fear-relevant distracters was not an effect of novelty. Experiment 3 suggested that fear-relevant stimuli produced interference regardless of whether participants were instructed as to the content of the images. The three experiments indicate that even very simplistic images of fear-relevant animals can divert attention.
A series of visual search experiments conducted by Abrams et al. (2008) indicates that disengagement of visual attention is slowed when the array of objects to be searched is close to the hands (hands on the monitor) compared with when it is not (hands in the lap). These experiments establish the impact one's hands can have on visual attentional processing. In the current paper we more closely examine these two hand postures with the goal of pinpointing which characteristics are crucial for the observed differences in attentional processing. Specifically, in a set of 4 experiments we investigated additional hand postures and additional modes of response to address this goal. We replicated the original Abrams et al. (2008) effect when only the two original postures were used; however, surprisingly, the effect was extinguished with the new range of postures and response modes, and this extinction persisted across different populations (German and English students) and different experimental hardware. Furthermore, analyses indicated that it is unlikely that the extinction of the effect was caused by increased practice due to additional blocks of trials or by an increased probability that participants were able to guess the purpose of the experiment. As such, our results suggest that in addition to the nature of the hand postures, the number of postures is a further important factor that influences the impact the hands have on visual processing.
Rapid visual flicker is known to capture attention. Here we show slow flicker can also capture attention under reciprocal temporal conditions. Observers searched for a target line (vertical or horizontal) among tilted distractors. Distractor lines were surrounded by luminance-modulating annuli, all flickering sinusoidally at 1.3 or 12.1 Hz, while the target’s annulus flickered at frequencies within this range. Search times improved with increasing target/distractor frequency differences. For target-distractor frequency separations > 5 Hz, reaction times were minimal, with high frequency targets correctly identified more rapidly than low frequency targets (~400 ms). Critically, however, at these optimal frequency separations search times for low and high frequency targets were unaffected by set size (slow flicker popped out from high flicker, and vice versa), indicating parallel and symmetric search performance when searching for high or low frequency targets. In a ‘cost’ experiment using 1.3 and 12.1 Hz flicker, the unique flickering annulus sometimes surrounded a distractor and, on other trials, surrounded the target. When centred on a distractor, the unique frequency produced a clear and symmetrical search cost. Together, these symmetric pop-out and search costs demonstrate that temporal frequency is a pre-attentive visual feature capable of capturing attention, and that it is relative rather than absolute frequencies that are critical. The shape of the search functions strongly suggests that early visual temporal frequency filters underlie these effects.
Observations on patients who lost visual imagery after brain damage call into question the notion that the knowledge subserving visual imagery is “tacit.” Dissociations between deficient imagery and preserved recognition of objects suggest that imagery is exclusively based on explicit knowledge, whereas retrieval of “tacit” visual knowledge is bound to the presence of the object and the task of recognizing it.
The sensorimotor account of perception is akin to Gibsonian direct realism. Both emphasize external properties of the world, challenging views based on the analysis of internal visual processing. To compare the role of distal and retinotopic parameters, distractor effect – an optomotor reaction of midbrain origin – is considered. Even in this case, permanence in the environment, not on the retina, explains the dynamics of habituation.
The proposed model holds that, at its most fundamental level, visual awareness is quantized. That is to say that visual awareness arises as individual bits of awareness through the action of neural circuits with hundreds to thousands of neurons in at least the human striate cortex. Circuits with specific topologies will reproducibly result in visual awareness that corresponds to basic aspects of vision like color, motion and depth. These quanta of awareness (qualia) are produced by the feedforward sweep that occurs through the geniculocortical pathway but are not integrated into a conscious experience until recurrent processing from centers like V4 or V5 selects the appropriate qualia being produced in V1 to create a percept. The model proposed here has the potential to shift the focus of the search for visual awareness to the level of microcircuits, and these likely exist across the kingdom Animalia. Thus, establishing qualia as the fundamental nature of visual awareness will not only provide a deeper understanding of awareness, but also allow for a more quantitative understanding of the evolution of visual awareness throughout the animal kingdom.
When different perceptual signals arising from the same physical entity are integrated, they form a more reliable sensory estimate. When such repetitive sensory signals are pitted against other competing stimuli, such as in a Stroop Task, this redundancy may lead to stronger processing that biases behavior towards reporting the redundant stimuli. This bias would therefore be expected to evoke greater incongruency effects than if these stimuli did not contain redundant sensory features. In the present paper we report that this is not the case for a set of three crossmodal, auditory-visual Stroop tasks. In these tasks participants attended to, and reported, either the visual or the auditory stimulus (in separate blocks) while ignoring the other, unattended modality. The visual component of these stimuli could be purely semantic (words), purely perceptual (colors), or the combination of both. Based on previous work showing enhanced crossmodal integration and visual search gains for redundantly coded stimuli, we had expected that relative to the single features, redundant visual features would have induced both greater visual distracter incongruency effects for attended auditory targets, and been less influenced by auditory distracters for attended visual targets. Overall, reaction times were faster for visual targets and were dominated by behavioral facilitation for the cross-modal interactions (relative to interference), but showed surprisingly little influence of visual feature redundancy. Post hoc analyses revealed modest and trending evidence for possible increases in behavioral interference for redundant visual distracters on auditory targets; however, these effects were substantially smaller than anticipated and were not accompanied by redundancy effects for behavioral facilitation or for attended visual targets.
We investigated whether the statistical predictability of a target's location would influence how quickly and accurately it was classified. Recent results have suggested that spatial probability can be a cue for the allocation of attention in visual search. One explanation for probability cuing is spatial repetition priming. In our two experiments we used probability distributions that were continuous across the display rather than relying on a few arbitrary screen locations. This produced fewer spatial repeats and allowed us to dissociate the effect of a high probability location from that of short-term spatial repetition. The task required participants to quickly judge the color of a single dot presented on a computer screen. In Experiment 1, targets were more probable in an off-center hotspot of high probability that gradually declined to a background rate. Targets garnered faster responses if they were near earlier target locations (priming) and if they were near the high probability hotspot (probability cuing). In Experiment 2, target locations were chosen on three concentric circles around fixation. One circle contained 80% of targets. The value of this ring distribution is that it allowed for a spatially restricted high probability zone in which sequentially repeated trials were not likely to be physically close. Participant performance was sensitive to the high-probability circle in addition to the expected effects of eccentricity and the distance to recent targets. These two experiments suggest that inhomogeneities in spatial probability can be learned and used by participants on-line and without prompting as an aid for visual stimulus discrimination and that spatial repetition priming is not a sufficient explanation for this effect. Future models of attention should consider explicitly incorporating the probabilities of target locations and features.
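The Experiment 1 design, a continuous off-center hotspot of target probability that declines to a background rate, can be sketched as a mixture distribution: a 2D Gaussian hotspot mixed with a uniform background. The screen dimensions, hotspot location, spread, and mixing weight below are illustrative assumptions, not the study's actual parameters.

```python
# Hedged sketch: sample target locations from an off-center Gaussian
# "hotspot" mixed with a uniform background rate across the display.
import numpy as np

rng = np.random.default_rng(0)
width, height = 800.0, 600.0        # hypothetical display size in pixels
hotspot = np.array([550.0, 200.0])  # hypothetical off-center hotspot
sigma, p_hotspot = 60.0, 0.6        # hotspot spread and mixing weight

def sample_target():
    if rng.random() < p_hotspot:
        # Gaussian draw around the hotspot, clipped to the screen.
        xy = rng.normal(hotspot, sigma)
        return np.clip(xy, [0.0, 0.0], [width, height])
    # Otherwise a uniform draw anywhere on the screen.
    return rng.uniform([0.0, 0.0], [width, height])

locs = np.array([sample_target() for _ in range(5000)])
near = np.linalg.norm(locs - hotspot, axis=1) < 2 * sigma
print(f"fraction within 2 sigma of hotspot: {near.mean():.2f}")
```

Because the hotspot is continuous rather than a fixed set of cells, exact spatial repeats are rare, which is what lets the design separate probability cuing from repetition priming.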
The role of the artist's intention in the interpretation of art has been the topic of a lively and ongoing discussion in analytic aesthetics. First, I sketch the current state of this debate, focusing especially on two competing views: actual and hypothetical intentionalism. Secondly, I discuss the search for a suitable test case, that is, a work of art that is interpreted differently by actual and hypothetical intentionalists, with only one of these interpretations being plausible. Many examples from many different art forms have been considered in this respect, but none of these test cases has proved convincing. Thirdly, I introduce two new test cases taken from contemporary visual art. I explain why these examples are better suited as test cases and how they lend support to the actual intentionalist position.
This paper argues that a theory of situated vision, suited for the dual purposes of object recognition and the control of action, will have to provide something more than a system that constructs a conceptual representation from visual stimuli: it will also need to provide a special kind of direct (preconceptual, unmediated) connection between elements of a visual representation and certain elements in the world. Like natural language demonstratives (such as `this' or `that') this direct connection allows entities to be referred to without being categorized or conceptualized. Several reasons are given for why we need such a preconceptual mechanism which individuates and keeps track of several individual objects in the world. One is that early vision must pick out and compute the relation among several individual objects while ignoring their properties. Another is that incrementally computing and updating representations of a dynamic scene requires keeping track of token individuals despite changes in their properties or locations. It is then noted that a mechanism meeting these requirements has already been proposed in order to account for a number of disparate empirical phenomena, including subitizing, search-subset selection and multiple object tracking (Pylyshyn et al., Canadian Journal of Experimental Psychology 48(2) (1994) 260). This mechanism, called a visual index or FINST, is briefly…
Evidence from many different paradigms (e.g. change blindness, inattentional blindness, transsaccadic integration) indicates that observers are often very poor at reporting changes to their visual environment. Such evidence has been used to suggest that the spatio-temporal coherence needed to represent change can only occur in the presence of focused attention. In four experiments we use modified change blindness tasks to demonstrate (a) that sensitivity to change does occur in the absence of awareness, and (b) that this sensitivity does not rely on the redeployment of attention. We discuss these results in relation to theories of scene perception, and propose a reinterpretation of the role of attention in representing change.
Argues for a category of “cognitive feelings”, which are representationally significant, but are not part of the content of the states they accompany. The feeling of pastness in episodic memory, of familiarity (missing in Capgras syndrome), and of motivation (that accompanies desire) are examples. The feeling of presence that accompanies normal visual states is due to such a cognitive feeling; the “two visual systems” are partially responsible for this feeling.