Methods. We employed a "flicker" technique, in which an original and a modified image (each of duration 240 ms) continually alternated, with a blank field (duration 80 ms) between each display. Images were all of real-world scenes. One of three kinds of change (appearance/disappearance, color, or translation) was made to an object or region in each scene. Changes were large and easily seen under normal conditions. Subjects viewed the flicker display, and pressed a key when they noticed the change.
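The alternation schedule described above can be sketched as a simple timing loop. This is a hypothetical illustration in plain Python (the function name and structure are ours, not the original experiment code); an actual experiment would need a display library with frame-accurate presentation.

```python
# Sketch of the flicker-paradigm schedule: original image (240 ms),
# blank (80 ms), modified image (240 ms), blank (80 ms), repeating
# until the observer responds. Names here are illustrative only.

def flicker_schedule(n_cycles):
    """Yield (frame_name, duration_ms) for n_cycles of the A-blank-A'-blank loop."""
    for _ in range(n_cycles):
        yield ("original", 240)
        yield ("blank", 80)
        yield ("modified", 240)
        yield ("blank", 80)

# One full cycle lasts 240 + 80 + 240 + 80 = 640 ms.
cycle = list(flicker_schedule(1))
total_ms = sum(d for _, d in cycle)
print(total_ms)  # 640
```

The blank field between displays is what masks the local motion transients that would otherwise draw attention to the change.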
Change blindness is the striking failure to see large changes that normally would be noticed easily. Over the past decade this phenomenon has greatly contributed to our understanding of attention, perception, and even consciousness. The surprising extent of change blindness explains its broad appeal, but its counterintuitive nature has also engendered confusions about the kinds of inferences that legitimately follow from it. Here we discuss the legitimate and the erroneous inferences that have been drawn, and offer a set of requirements to help separate them. In doing so, we clarify the genuine contributions of change blindness research to our understanding of visual perception and awareness, and provide a glimpse of some ways in which change blindness might shape future research.
Five aspects of visual change detection are reviewed. The first concerns the concept of _change_ itself, in particular the ways it differs from the related notions of _motion_ and _difference_. The second involves the various methodological approaches that have been developed to study change detection; it is shown that under a variety of conditions observers are often unable to see large changes directly in front of them. Next, it is argued that this "change blindness" indicates that focused attention is needed to detect change, and that this can help map out the nature of visual attention. The fourth aspect concerns how these results affect our understanding of visual perception; for example, the proposal that a sparse, dynamic representation underlies much of our visual experience. Finally, a brief discussion is presented concerning the limits to our current understanding of change detection.
One of the more powerful impressions created by vision is that of a coherent, richly-detailed world where everything is present simultaneously. Indeed, this impression is so compelling that we tend to ascribe these properties not only to the external world, but to our internal representations as well. But results from several recent experiments argue against this latter ascription. For example, changes in images of real-world scenes often go unnoticed when made during a saccade, flicker, blink, or movie cut. This "change blindness" provides strong evidence against the idea that our brains contain a picture-like representation of the scene that is everywhere detailed and coherent.
Large changes in a scene often become difficult to notice if made during an eye movement, image flicker, movie cut, or other such disturbance. It is argued here that this _change blindness_ can serve as a useful tool to explore various aspects of vision. This argument centers around the proposal that focused attention is needed for the explicit perception of change. Given this, the study of change perception can provide a useful way to determine the nature of visual attention, and to cast new light on the way that it is, and is not, involved in visual perception. To illustrate the power of this approach, this paper surveys its use in exploring three different aspects of vision. The first concerns the general nature of _seeing_. To explain why change blindness can be easily induced in experiments but apparently not in everyday life, it is proposed that perception involves a _virtual representation_, where object representations do not accumulate, but are formed as needed. An architecture containing both attentional and nonattentional streams is proposed as a way to implement this scheme. The second aspect concerns the ability of observers to detect change even when they have no visual experience of it. This _sensing_ is found to take on at least two forms: detection without visual experience (but still with conscious awareness), and detection without any awareness at all. It is proposed that these are both due to the operation of a nonattentional visual stream. The final aspect considered is the nature of visual attention itself: the mechanisms involved when _scrutinizing_ items. Experiments using controlled stimuli show the existence of various limits on visual search for change. It is shown that these limits provide a powerful means to map out the attentional mechanisms involved.
When brief blank fields are placed between alternating displays of an original and a modified scene, a striking failure of perception is induced: the changes become extremely difficult to notice, even when they are large, presented repeatedly, and the observer expects them to occur (Rensink, O'Regan, & Clark, 1997). To determine the mechanisms behind this induced "change blindness", four experiments examine its dependence on initial preview and on the nature of the interruptions used. Results support the proposal that representations at the early stages of visual processing are highly volatile, and that focused attention is needed to stabilize them sufficiently to support the perception of change.
Ideomotor actions are behaviours that are unconsciously initiated and express a thought rather than a response to a sensory stimulus. The question examined here is whether ideomotor actions can also express nonconscious knowledge. We investigated this via the use of implicit long-term semantic memory, which is not available to conscious recall. We compared accuracy of answers to yes/no questions using both volitional report and ideomotor response. Results show that when participants believed they knew the answer, responses in the two modalities were similar. But when they believed they were guessing, accuracy was at chance for volitional report, but significantly higher for the Ouija response. These results indicate that implicit semantic memory can be expressed through ideomotor actions. They also suggest that this approach can provide an interesting new methodology for studying implicit processes in cognition.
Scene perception is the visual perception of an environment as viewed by an observer at any given time. It includes not only the perception of individual objects, but also such things as their relative locations, and expectations about what other kinds of objects might be encountered. Given that scene perception is so effortless for most observers, it might be thought of as something easy to understand. However, the amount of effort required by a process often bears little relation to its underlying complexity. A closer look shows that scene perception is a highly complex activity, and that any account of it must deal with several difficult issues: What exactly is a scene? What aspects of it do we represent? And what are the processes involved? Finding the answers to these questions has proven to be extraordinarily difficult. However, answers are being found, and a general understanding of scene perception is beginning to emerge. Interestingly, this emerging picture shows that much of our subjective experience as observers is highly misleading, at least in regards to the way that scene perception is carried out. In particular, the impression of a stable picture-like representation somewhere in our heads turns out to be largely an illusion. To see how this comes about, imagine a seashore where there is a sailboat, some rocks, some clouds, and perhaps a few other objects (see Figure 1). How do we perceive this scene? Intuitively, it seems that the set of objects in the environment would give rise to a corresponding set of representations in the observer. Thus, there would be detailed representations of the sailboat, clouds, etc., with each representation describing the identity, location, and 'meaning' of the item it refers to. In this view, the goal of scene perception is to form a literal re-presentation of the world, with all of its visible structure represented concurrently and in great detail everywhere. This representation then serves as the basis for all subsequent visual processing.
This past decade has seen a great resurgence of interest in the perception of change. Change has, of course, long been recognized as a phenomenon worthy of study, and vision scientists have given their attention to it at various times in the past (for a review, see Rensink, 2002a). But things seem different this time around. This time, there is an emerging belief that instead of being just another visual ability, the perception of change may be something central to our ‘visual life’, and that the mechanisms that underlie it may provide considerable insight into the operation of much of our visual system. This development may have been sparked by a number of factors: technology that allowed the easy creation of dynamic displays, a feeling in the air that it was time for something new, or it may have simply been a matter of chance. But once underway, this development was fueled by results, results that included both novel behavioral effects and new theoretical insights. Many of these centered around change blindness, the failure of observers to see large changes that are made contingent upon…
We present a rigorous way to evaluate the visual perception of correlation in scatterplots, based on classical psychophysical methods originally developed for simple properties such as brightness. Although scatterplots are graphically complex, the quantity they convey is relatively simple. As such, it may be possible to assess the perception of correlation in a similar way.
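A psychophysical study of this kind needs stimuli whose correlation is under experimental control. One standard way to generate scatterplot data with a prescribed Pearson correlation r is to mix a shared component with independent noise; the sketch below is a hypothetical illustration (function names are ours), not the authors' actual procedure.

```python
import math
import random

def correlated_sample(r, n, seed=0):
    """Draw n (x, y) pairs whose population Pearson correlation is r.

    With x and e independent standard normals, y = r*x + sqrt(1-r^2)*e
    has unit variance and correlation r with x.
    """
    rng = random.Random(seed)
    pts = []
    for _ in range(n):
        x = rng.gauss(0, 1)
        e = rng.gauss(0, 1)
        pts.append((x, r * x + math.sqrt(1 - r * r) * e))
    return pts

def pearson(pts):
    """Sample Pearson correlation of a list of (x, y) pairs."""
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    sxy = sum((x - mx) * (y - my) for x, y in pts)
    sx = math.sqrt(sum((x - mx) ** 2 for x, _ in pts))
    sy = math.sqrt(sum((y - my) ** 2 for _, y in pts))
    return sxy / (sx * sy)

pts = correlated_sample(0.7, 20000)
print(round(pearson(pts), 2))
```

With stimuli like these, classical methods (e.g., staircase procedures for just-noticeable differences) can then be applied to correlation just as they are to brightness.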
G. Francis and F. Hermens (2002) used computer simulations to claim that many current models of metacontrast masking can account for the findings of V. Di Lollo, J. T. Enns, and R. A. Rensink (2000). They also claimed that notions of reentrant processing are not necessary because all of Di Lollo et al.'s data can be explained by feed-forward models. The authors show that Francis and Hermens's claims are vitiated by inappropriate modeling of attention and by ignoring important aspects of Di Lollo et al.'s results.
Recent advances in our understanding of visual perception have shown it to be a far more complex and counterintuitive process than previously believed. Several important consequences follow from this. First, the design of an effective statistical graphics system is unlikely to succeed based on intuition alone; instead, it must rely on a more sophisticated, systematic approach. The basic elements of such an approach are outlined here, along with several design principles. An overview is then given of recent advances in our understanding of visual perception, including rapid perception, visual attention, and scene perception. It is then argued that the mechanisms involved can be successfully harnessed to allow data to be displayed more effectively than at present. Several directions of development are discussed, including effective use of visual attention, the display of dynamic information, and the effective use of nonattentional and nonconscious perceptual systems.
Stephen Few provides a nice overview of the reasons why we should design data visualizations to be effective, and why it’s important to understand human perception when doing so. In fact, he’s done this so well that I can’t add much to his arguments. But I can push the basic message a bit further, out into the times before and after those he discusses. Out into areas that are not as well known, or not really developed, where new opportunities and new dangers may lie… Perhaps the best place to begin is the beginning. Discussing the beginning of visualization is not without its problems, if only for the fact that there exist several different kinds of visualization—for example, data visualization, information visualization, and scientific visualization. But whatever adjective is used, we generally find a history more extensive than commonly imagined. For example, although Descartes did contribute to the graphic display of quantitative data in the 17th century, graphs had already been used to represent things such as temperature and light intensity three centuries earlier. Indeed, as Manfredo Massironi discusses in his book (Massironi, 2002; p. 131), quantities such as displacement were graphed as a function of time as far back as the 11th century.
The capacity of visual attention/STM can be determined by change-detection experiments. Detecting the presence of change leads to an estimate of 4 items, while detecting the absence of change leads to an estimate of 1 item. Thus, there are two magical numbers in vision: 4 and 1. The underlying limits, however, are not necessarily those of central STM.
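Capacity estimates like these are typically derived from hit and false-alarm rates in a change-detection task. One standard estimator for whole-display change detection is Pashler's (1988) K = N(H − F)/(1 − F); the numbers below are hypothetical rates for illustration, not data from the studies described.

```python
def pashler_k(n_items, hit_rate, fa_rate):
    """Pashler's (1988) capacity estimate for whole-display change detection.

    n_items:  number of items in the display (N)
    hit_rate: proportion of change trials correctly reported (H)
    fa_rate:  proportion of no-change trials falsely reported (F)
    """
    return n_items * (hit_rate - fa_rate) / (1 - fa_rate)

# Hypothetical illustration: 8-item displays, 55% hits, 10% false alarms.
print(round(pashler_k(8, 0.55, 0.10), 1))  # 4.0
```

The abstract's caveat applies here too: an estimate of K ≈ 4 items constrains the task-level limit, but does not by itself locate that limit in central STM.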
Knill, Kersten, & Mamassian (Chapter 6) provide an interesting discussion of how the Bayesian formulation can be used to help investigate human vision. In their view, computational theories can be based on an ideal observer that uses Bayesian inference to make optimal use of available information. Four factors are important here: the image information used, the output structures estimated, the priors assumed (i.e., knowledge about the structure of the world), and the likelihood function used (i.e., knowledge about the projection of the world onto the sensors). Knill & Kersten argue that such a framework not only helps analyze a perceptual task, but can also help investigators to define it. Two examples are provided (the interpretation of surface contour and the perception of moving shadows) to show how this approach can be used in practice. As the authors admit, most (if not all) perceptual processes are ill-suited to a "strong" Bayesian approach based on a single consistent model of the world. Instead, they argue for a "weak" variant that assumes Bayesian inference to be carried out in modules of more limited scope. But how weak is "weak"? Are such approaches suitable for only a few relatively low-level tasks, or can they be applied more generally? Could a weak Bayesian approach, for example, explain how we would recognize the return of Elvis Presley? The formal modelling of human perception. To help get a fix on things, it is useful to examine the fate of an earlier attempt to formalize human perception: the application of information theory. It was once hoped that this theory—a close cousin of the Bayesian formulation—would provide a way to uncover information-handling laws that were largely independent of physical implementation. In this approach, the human nervous system was assumed to have…
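The four factors listed above combine in the usual Bayes rule: the posterior over output structures is proportional to the likelihood of the image data times the prior. A minimal sketch over a discrete hypothesis space (the hypothesis names and probabilities are hypothetical, chosen only to echo the surface-contour example):

```python
def bayes_update(prior, likelihood):
    """Posterior over hypotheses: P(h | d) ∝ P(d | h) * P(h)."""
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(unnorm.values())  # normalizing constant P(d)
    return {h: p / z for h, p in unnorm.items()}

# Hypothetical example: is a surface convex or concave, given image
# evidence that is twice as likely under the "convex" interpretation?
prior = {"convex": 0.5, "concave": 0.5}
likelihood = {"convex": 0.8, "concave": 0.4}
post = bayes_update(prior, likelihood)
print(round(post["convex"], 3))  # 0.667
```

A "weak" Bayesian module in the authors' sense would run an update like this over a restricted hypothesis space, rather than over a single consistent model of the whole world.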
An overview is presented of the ways that change blindness has been applied to the study of various issues in perception and cognition. Topics include mechanisms of change perception, allocation of attention, nonconscious perception, and cognitive beliefs. Recent work using change blindness to investigate these topics is surveyed, along with a brief discussion of some of the ways that these approaches may further develop over the next few years.
This chapter presents an overview of several recent developments in vision science, and outlines some of their implications for the management of visual attention in graphic displays. These include ways of sending attention to the right item at the right time, techniques to improve attentional efficiency, and possibilities for offloading some of the processing typically done by attention onto nonattentional mechanisms. In addition, it is argued that such techniques not only allow more effective use to be made of visual attention, but also open up new possibilities for human-machine interaction.
While some studies suggest cultural differences in visual processing, others do not, possibly because the complexity of their tasks draws upon high-level factors that could obscure such effects. To control for this, we examined cultural differences in visual search for geometric figures, a relatively simple task for which the underlying mechanisms are reasonably well known. We replicated earlier results showing that North Americans had a reliable search asymmetry for line length: Search for long among short lines was faster than vice versa. In contrast, Japanese participants showed no asymmetry. This difference did not appear to be affected by stimulus density. Other kinds of stimuli resulted in other patterns of asymmetry differences, suggesting that these are not due to factors such as analytic/holistic processing but are based instead on the target-detection process. In particular, our results indicate that at least some cultural differences reflect different ways of processing early-level features, possibly in response to environmental factors.
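Search asymmetries of the kind described are usually quantified by the slope of reaction time against display set size (ms/item), with a flat slope indicating efficient search. The sketch below uses hypothetical reaction times for illustration, not data from this study.

```python
def search_slope(set_sizes, rts):
    """Least-squares slope (ms/item) of mean reaction time vs. set size."""
    n = len(set_sizes)
    mx = sum(set_sizes) / n
    my = sum(rts) / n
    num = sum((x - mx) * (y - my) for x, y in zip(set_sizes, rts))
    den = sum((x - mx) ** 2 for x in set_sizes)
    return num / den

# Hypothetical RTs: an efficient search (e.g., long target among short
# distractors) vs. an inefficient one (short target among long).
sizes = [4, 8, 12]
print(search_slope(sizes, [520, 560, 600]))   # 10.0 ms/item
print(search_slope(sizes, [600, 800, 1000]))  # 50.0 ms/item
```

An asymmetry shows up as a difference between these two slopes; its absence in the Japanese participants means the two conditions yielded comparable slopes.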