Stephen Few provides a nice overview of the reasons why we should design data visualizations to be effective, and why it’s important to understand human perception when doing so. In fact, he’s done this so well that I can’t add much to his arguments. I can, however, push the basic message a bit further, out into the times before and after those he discusses, out into areas that are not as well known, or not really developed, where new opportunities and new dangers may lie… Perhaps the best place to begin is the beginning. Discussing the beginning of visualization is not without its problems, if only for the fact that there exist several different kinds of visualization—for example, data visualization, information visualization, and scientific visualization. But whatever adjective is used, we generally find a history more extensive than commonly imagined. For example, although Descartes did contribute to the graphic display of quantitative data in the 17th century, graphs had already been used to represent things such as temperature and light intensity three centuries earlier. Indeed, as Manfredo Massironi discusses in his book (Massironi, 2002, p. 131), quantities such as displacement were graphed as a function of time as far back as the 11th century.
Although much of vision appears to be effortless and all-encompassing, there nevertheless exist limits to what it can do. Consider, for example, air traffic control, where it is imperative to keep track of all moving items in a display (corresponding to the airplanes in an airspace). If only a single item is present, it can generally be tracked without problem. It is also possible to track four or five items simultaneously, although some effort is needed. But for twenty or thirty items, even a maximal effort will not suffice, and the task must be shared among several controllers. What appears to be happening in such cases is that visual perception is constrained by a consciously-controlled factor within the observer, a factor that enables certain kinds of processing to take place, but which is limited in the extent to which it can be applied. This factor is referred to as visual attention.
In this paper we examine to what extent the lengths of the links in an animated articulated figure can be changed without the viewer being aware of the change. This is investigated in terms of a framework that emphasizes the role of attention in visual perception. We conducted a set of five experiments to establish bounds for the sensitivity to changes in length as a function of several parameters and the amount of attention available. We found that while length changes of 3% can be perceived when the relevant links are given full attention, changes of over 20% can go unnoticed when attention is not focused in this way. These results provide general guidelines for algorithms that produce or process character motion data and also bring to light some of the potential gains that stand to be achieved with attention-based algorithms.
Theories of human vision have generally assumed that the features underlying visual search and texture segmentation correspond to simple measurements made at the first stages of visual processing. In this paper, we describe a series of visual search experiments that refute this assumption. Using several variants of the Mueller-Lyer figure, we show that an illusion of length exists in preattentive vision -- search is easy when items contain line segments of equal length, but becomes difficult when these segments are adjusted to have the same apparent length. This illusion cannot be reduced by selective inhibition of features, such as that used to facilitate the rapid detection of feature conjunctions. For example, subjects are unable to ignore the wings when making judgements of the test line, even when it is advantageous to do so. This rules out explanations based on interactions among the features themselves. We also show that spatial filtering cannot account for this illusion, since these effects are indifferent to the sign of contrast of the line segments and can occur for textured lines having the same first-order statistics. The illusion, however, can be explained by a model in which line length is determined via grouping operations acting at a level prior to the formation of preattentive features.
A new proof is presented of Tsotsos' result that the VISUAL MATCH problem is NP-complete when no (high-level) constraints are imposed on the search space. Like the proof given by Tsotsos, it is based on the polynomial reduction of the NP-complete problem KNAPSACK to VISUAL MATCH. Tsotsos' proof, however, involves limited-precision real numbers, which introduces an extra degree of complexity to his treatment. The reduction of KNAPSACK to VISUAL MATCH presented here makes no use of limited-precision numbers, leading to a simpler and more direct proof of the result.
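For readers unfamiliar with the source problem of the reduction: the decision version of KNAPSACK asks whether some subset of items fits within a weight budget while reaching a value target. The sketch below is only an illustration of that problem (a brute-force checker, exponential in the number of items, as expected for an NP-complete problem when no structure is exploited); it is not the paper's reduction to VISUAL MATCH.

```python
def knapsack_decision(weights, values, capacity, target):
    """Decision-version KNAPSACK: is there a subset of items whose total
    weight is <= capacity and whose total value is >= target?
    Brute force over all 2^n subsets for illustration only."""
    n = len(weights)
    for mask in range(1 << n):
        w = sum(weights[i] for i in range(n) if mask & (1 << i))
        v = sum(values[i] for i in range(n) if mask & (1 << i))
        if w <= capacity and v >= target:
            return True
    return False
```

A polynomial reduction maps each such instance to a VISUAL MATCH instance that has a solution exactly when the KNAPSACK answer is "yes".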
It has recently been demonstrated that early vision is capable of recovering several properties of the three-dimensional world. We describe a series of visual search experiments showing that such recovery includes a completion process that allows for the interpretation of objects that are partially occluded. Search for easily-detectable line segments is made much more difficult when they can be interpreted as the visible parts of a line that has been occluded by a three-dimensional object. We describe some of the conditions under which this completion process takes place, such as its dependence on orientation, contrast, and spacing. We then show that fragments of three-dimensional objects can be completed in a similar way. These results extend what is known about rapid parallel scene interpretation -- in addition to assigning scene-based properties to image elements, early vision also constructs elements not present in the original image.
Focused attention is needed to perceive change (Rensink et al., 1997, Psychological Science, 8: 368-373). But how much attentional processing is given to an item? And does this depend on the nature of the task?
Large changes that occur in clear view of an observer can become difficult to notice if made during an eye movement, blink, or other such disturbance. This change blindness is consistent with the proposal that focused visual attention is necessary to see change, with a change becoming difficult to notice whenever conditions prevent attention from being automatically drawn to it.
Studies of eye movements, of memory for scenes, and of visual attention have long proceeded in fairly independent ways. But in recent years it has become apparent that the three disciplines have something to gain from exchanging ideas: scene knowledge (and therefore scene memory) originates in eye exploration, but certainly also…
A striking blindness to changes in real-world scenes can be induced using a variety of techniques (e.g., saccade-, blink-, or flicker-contingent change). The strength and robustness of this phenomenon point towards the involvement of mechanisms central to visual perception. It is proposed here that this induced change blindness can be explained by an…
Several recent investigations (Grimes, in press; McConkie and Currie, in preparation) report that large changes in images of natural scenes can remain unnoticed if these are made during saccades. We show here that similar massive effects can be obtained without synchronization to saccades. This is done via a "flicker" technique in which an original and an altered image (each of duration 240 ms) are repetitively alternated, with a blank field (duration 27 or 290 ms) between each display. One of four kinds of change (color, left-right reflection, translation, or appearance/disappearance) was made in the foreground or background of each scene. Many of these changes were difficult to detect, even over long periods of observation (35 seconds). We believe that this is due to the spatially-distributed transient induced by the blank field, which swamps the localized flash that would otherwise draw attention to the changes; observers were therefore forced to rely on higher-level (probably non-iconic) representations of the scenes to detect the change. Our results indicate that the failure to notice scene changes during saccades is not due to saccade-specific mechanisms, but rather involves more general mechanisms of visual attention.
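The flicker technique's display schedule can be sketched as a simple alternation loop, using the timings reported in the abstract (240 ms images, blanks of 27 or 290 ms). The function name and structure here are illustrative, not taken from the paper's software.

```python
def flicker_schedule(total_ms, image_ms=240, blank_ms=290):
    """Yield (display, duration_ms) pairs for the flicker paradigm:
    original and altered images alternate, with a blank field between
    each display, until total_ms of presentation time is scheduled."""
    t = 0
    displays = ["original", "altered"]
    i = 0
    while t < total_ms:
        yield (displays[i % 2], image_ms)
        t += image_ms
        yield ("blank", blank_ms)
        t += blank_ms
        i += 1
```

For example, `list(flicker_schedule(1200))` produces the sequence original, blank, altered, blank, original, blank: the change recurs on every cycle, yet the blank's global transient masks the local one that would otherwise draw attention to it.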
Methods. Visual search experiments were carried out using simple black and white figures corresponding to shiny objects lit from various directions. These included, for example, depictions of cylinders with highlights positioned at various heights (see figure). Targets and distractors differed only in the arrangement of their constituent regions, allowing them to be distinguished by the position of the highlights on the corresponding objects.
The task of visual search is to determine as rapidly as possible whether a target item is present or absent in a display. Rapidly detected items are thought to contain features that correspond to primitive elements in the human visual system. In previous theories, it has been assumed that visual search is based on simple two-dimensional features in the image. However, visual search also has access to another level of representation, one that describes properties in the corresponding three-dimensional scene. Among these properties are three-dimensionality and the direction of lighting, but not viewing direction. These findings imply that the parallel processes of early vision are much more sophisticated than previously assumed.
One of the more compelling beliefs about vision is that it is based on representations that are coherent and complete, with everything in the visual field described in great detail. However, changes made during a visual disturbance are found to be difficult to see, arguing against the idea that our brains contain a detailed, picture-like representation of the scene. Instead, it is argued here that a more dynamic, "just-in-time" representation is involved, one with deep similarities to the way that users interact with external displays. It is further argued that these similarities can provide a basis for the design of intelligent display systems that can interact with humans in highly effective and novel ways.
This course shows how traditional human-computer interaction methodologies, augmented with theories and experimental findings from cognitive science, address challenges posed by multimodal interaction using vision, haptics, and sound in conventional and immersive computer graphics environments. Attendees learn the theory and practice of multimodal interaction design in a multidisciplinary setting.
When brief blank fields are placed between alternating displays of an original and a modified scene, a striking form of "change blindness" is induced, where the changes are difficult to see (Rensink, O'Regan, and Clark, 1997). Experiments are presented here examining the dependence of this phenomenon on initial preview and type of transient caused by the blanks. Results support the idea that our representation of the world is a sparse one, coordinated by attentional mechanisms.
This work investigates the ability of the human visual system to discriminate self-similar Gaussian random textures. The power spectra of such textures are similar to themselves when rescaled by some factor h > 1. As such, these textures provide a natural domain for testing the hypothesis that texture perception is based on a set of spatial-frequency channels characterized by filters of similar shape.
Focus+Context techniques are commonly used in visualization systems to simultaneously provide both the details and the context of a particular dataset. This paper proposes a new methodology to empirically investigate the effect of various Focus+Context transformations on human perception. This methodology is based on the shaker paradigm, which tests performance for a visual task on an image that is rapidly alternated with a transformed version of itself. An important aspect of this technique is that it can determine two different kinds of perceptual cost: (i) the effect on the perception of a static transformed image, and (ii) the effect of the dynamics of the transformation itself. This technique has been successfully applied to determine the extent to which human perception is invariant to scaling and rotation [Rensink 2004]. In this paper, we extend this approach to examine nonlinear fisheye transformations of the type typically used in a Focus+Context system. We show that there exists a no-cost zone where performance is unaffected by an abrupt, noticeable fisheye transformation, and that its extent can be determined. The lack of perceptual cost with regard to these sudden changes contradicts the belief that they are necessarily detrimental to performance, and suggests that smoothly animated transformations between visual states are not always necessary. We show that this technique can also map out low-cost zones where transformations result in only a slight degradation of performance. Finally, we show that rectangular grids have no positive effect on performance, acting only as a form of visual clutter. These results therefore demonstrate that the perceptual costs of nonlinear transformations can be successfully quantified. Interestingly, they show that some kinds of sudden transformation can be experienced with minimal or no perceptual cost.
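Nonlinear fisheye transformations of the kind studied in the Focus+Context work above can be sketched with a simple radial magnification profile. The profile used here is the classic Sarkar-Brown form g(r) = (d+1)r / (dr+1); the distortion factor d and the normalization to [0, 1] coordinates are illustrative assumptions, not necessarily the exact transformation used in the experiments.

```python
import math

def fisheye(x, y, cx, cy, d=3.0):
    """Radial fisheye distortion about a focus point (cx, cy).
    Points near the focus are magnified; the point at unit distance
    maps to itself. Coordinates are assumed normalized to [0, 1]."""
    dx, dy = x - cx, y - cy
    r = math.hypot(dx, dy)
    if r == 0:
        return (x, y)
    g = (d + 1) * r / (d * r + 1)  # distorted radius, g(1) == 1
    return (cx + dx * g / r, cy + dy * g / r)
```

A point close to the focus is pushed outward (magnifying the focal region), while points at the edge of the normalized field stay put, which is what lets the context remain visible around the magnified focus.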
Recent studies have shown that several scene-based properties can be determined rapidly and in parallel at preattentive levels, including surface convexity and concavity (Ramachandran, 1988), direction of illumination (Enns & Rensink, 1990), and three-dimensional orientation (Enns & Rensink, 1991). We show that in addition to these properties, preattentive vision is also sensitive to scene structure defined by shadows.
It has generally been assumed that rapid visual search is based on simple features and that spatial relations between features are irrelevant for this task. Seven experiments involving search for line drawings contradict this assumption; a major determinant of search is the presence of line junctions. Arrow- and Y-junctions were detected rapidly in isolation and when they were embedded in drawings of rectangular polyhedra. Search for T-junctions was considerably slower. Drawings containing T-junctions often gave rise to very slow search even when distinguishing arrow- or Y-junctions were present. This sensitivity to line relations suggests that preattentive processes can extract 3-dimensional orientation from line drawings. A computational model is outlined for how this may be accomplished in early human vision.
We report on a new visual search task in which observers make highly accurate two-alternative forced-choice responses within 100-400 ms of display onset. This is a striking result, since accurate responding in a difficult search of this kind is usually possible only after at least 500 ms from display onset. The conditions under which such rapid responses are obtained involve brief initial glimpses of a search display interrupted by either a blank screen or a glimpse of a second display. On re-presentation of the original display, a significant proportion of responses are made within 100-500 ms. Since these responses are never made in the absence of display re-presentation, they are evidence of "rapid resumption" of the search task. We report experiments exploring the conditions critical for rapid resumption and consider its implications for memorial processes in visual search.
A general treatment of stationary Gaussian fractals is presented. Relations are established between the fractal properties of an n-dimensional random field and the form of its correlation function and power spectrum. These relations are used to show that the second-order parameter H commonly used to describe fractal texture is insufficient to characterize all fractal aspects of the field. A larger set of measures -- based on the power spectrum -- is shown to provide a more complete description of fractal texture.
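The link between the parameter H and the power spectrum can be illustrated by spectral synthesis: a stationary Gaussian "fractal" signal is produced by giving Fourier amplitudes a power-law falloff and randomizing phases. The exponent beta = 2H + 1 used below is the standard relation for the one-dimensional case; the function and its normalization are an illustrative sketch, not the paper's formal treatment.

```python
import numpy as np

def fractal_gaussian_1d(n_samples, H=0.5, seed=0):
    """Spectral synthesis of a 1-D stationary Gaussian fractal signal:
    Fourier amplitudes follow |F(f)| ~ f^(-beta/2) with beta = 2H + 1,
    phases are uniform random, and the result is normalized to unit
    standard deviation."""
    rng = np.random.default_rng(seed)
    freqs = np.fft.rfftfreq(n_samples)
    amp = np.zeros_like(freqs)
    amp[1:] = freqs[1:] ** (-(2 * H + 1) / 2.0)  # zero DC component
    phases = rng.uniform(0, 2 * np.pi, len(freqs))
    spectrum = amp * np.exp(1j * phases)
    signal = np.fft.irfft(spectrum, n=n_samples)
    return signal / signal.std()
```

Because only the second-order statistics are fixed, many perceptually different textures share the same H, which is one way to see why a single second-order parameter cannot characterize all fractal aspects of a field.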
Previous theories of early vision have assumed that visual search is based on simple two-dimensional aspects of an image, such as the orientation of edges and lines. It is shown here that search can also be based on three-dimensional orientation of objects in the corresponding scene, provided that these objects are simple convex blocks. Direct comparison shows that image-based and scene-based orientation are similar in their ability to facilitate search. These findings support the hypothesis that scene-based properties are represented at preattentive levels in early vision.
This paper explores the ways in which resource limitations influence the nature of perceptual and cognitive processes. A framework is developed that allows early visual processing to be analyzed in terms of these limitations. In this approach, there is no one "best" system for any visual process. Rather, a spectrum of systems exists, differing in the particular trade-offs made between performance and resource requirements.
We show that cast shadows can have a significant influence on the speed of visual search. In particular, we find that search based on the shape of a region is affected when the region is darker than the background and corresponds to a shadow formed by lighting from above. Results support the proposal that an early-level system rapidly identifies regions as shadows and then discounts them, making their shapes more difficult to access. Several constraints used by this system are mapped out, including constraints on the luminance and texture of the shadow region, and on the nature of the item casting the shadow. Among other things, this system is found to distinguish between line elements (items containing only edges) and surface elements (items containing visible surfaces), with only the latter deemed capable of casting a shadow.
It has generally been assumed that parallel visual search can only be based on the presence of simple features -- the spatial relations between features do not influence this process. We describe a series of visual search experiments that contradict this assumption. Search for line drawings of opaque polyhedra is greatly influenced by some line relations. In particular, search is rapid for line drawings (i) that have arrow- and Y-junctions corresponding to corners formed from orthogonal surfaces, and (ii) that do…
Recent developments in vision science have resulted in several major changes in our understanding of human visual perception. For example, attention no longer appears necessary for "visual intelligence"--a large amount of sophisticated processing can be done without it. Scene perception no longer appears to involve static, general-purpose descriptions, but instead may involve dynamic representations whose content depends on the individual and the task. And vision itself no longer appears to be limited to the production of a conscious "picture"--it may also guide processes outside the conscious awareness of the observer.
Pascal routines are described for performing and testing various timing and display operations on Macintosh computers. Millisecond timing of internal operations is described, as is a method to time inputs more accurately than tick timing. Techniques are also presented for placing arbitrary bit-image displays on the screen within one screen refresh. All routines are based on Toolbox procedures applicable to the entire range of Macintosh computers.
A computational theory is developed that explains how line drawings of polyhedral objects can be interpreted rapidly and in parallel at early levels of human vision. The key idea is that a time-limited process can correctly recover much of the three-dimensional structure of these objects when split into concurrent streams, each concerned with a single aspect of scene structure.
Purpose. Although observers easily extract the global meaning of natural scenes, it is often the case that they do not notice or remember all of their individual properties. It appears that some scene properties are more readily coded in mental representations than others. We tested the role of three different object properties - color, location, and presence/absence - in scene representations.
Adelson & Pentland (Chapter 11) use an engaging metaphor to illustrate their position on scene analysis: interpretations are produced by a workshop that employs a set of specialists, each concerned with a single aspect of the scene. The authors argue that it is too expensive to have a supervisor co-ordinate the specialists and that it is too expensive to let them operate independently. They then show that a careful sequencing of the specialists leads to solutions of minimum cost, at least for their world of Mondrian panels.
We describe an update to our visual search software for the Macintosh line of computers. The new software, VSearch Color, gives users access to the full-color capabilities of the Macintosh II line. One of the key features of the new software is its ability to treat graphics information separately from color information. This makes it easy to study color independently of form, to design experiments based on isoluminant stimuli, and to incorporate texture segregation, visual identification, number discrimination, adaptation, masking, and spatial cuing into the basic visual search paradigm.
This past decade has seen a great resurgence of interest in the perception of change. Change has, of course, long been recognized as a phenomenon worthy of study, and vision scientists have given their attention to it at various times in the past (for a review, see Rensink, 2002a). But things seem different this time around. This time, there is an emerging belief that instead of being just another visual ability, the perception of change may be something central to our ‘visual life’, and that the mechanisms that underlie it may provide considerable insight into the operation of much of our visual system. This development may have been sparked by a number of factors: technology that allowed the easy creation of dynamic displays, a feeling in the air that it was time for something new, or it may have simply been a matter of chance. But once underway, this development was fueled by results, results that included both novel behavioral effects and new theoretical insights. Many of these centered around change blindness, the failure of observers to see large changes that are made contingent upon…
Advances in neuroscience implicate reentrant signaling as the predominant form of communication between brain areas. This principle was used in a series of masking experiments that defy explanation by feed-forward theories. The masking occurs when a brief display of target plus mask is continued with the mask alone. Two masking processes were found: an early process affected by physical factors such as adapting luminance and a later process affected by attentional factors such as set size. This later process is called masking by object substitution, because it occurs whenever there is a mismatch between the reentrant visual representation and the ongoing lower level activity. Iterative reentrant processing was formalized in a computational model that provides an excellent fit to the data. The model provides a more comprehensive account of all forms of visual masking than do the long-held feed-forward views based on inhibitory contour interactions.
Knill, Kersten, & Mamassian (Chapter 6) provide an interesting discussion of how the Bayesian formulation can be used to help investigate human vision. In their view, computational theories can be based on an ideal observer that uses Bayesian inference to make optimal use of available information. Four factors are important here: the image information used, the output structures estimated, the priors assumed (i.e., knowledge about the structure of the world), and the likelihood function used (i.e., knowledge about the projection of the world onto the sensors). Knill & Kersten argue that such a framework not only helps analyze a perceptual task, but can also help investigators to define it. Two examples are provided (the interpretation of surface contour and the perception of moving shadows) to show how this approach can be used in practice. As the authors admit, most (if not all) perceptual processes are ill-suited to a "strong" Bayesian approach based on a single consistent model of the world. Instead, they argue for a "weak" variant that assumes Bayesian inference to be carried out in modules of more limited scope. But how weak is "weak"? Are such approaches suitable for only a few relatively low-level tasks, or can they be applied more generally? Could a weak Bayesian approach, for example, explain how we would recognize the return of Elvis Presley? To help get a fix on the formal modelling of human perception, it is useful to examine the fate of an earlier attempt to formalize it: the application of information theory. It was once hoped that this theory—a close cousin of the Bayesian formulation—would provide a way to uncover information-handling laws that were largely independent of physical implementation. In this approach, the human nervous system was assumed to have…
G. Francis and F. Hermens (2002) used computer simulations to claim that many current models of metacontrast masking can account for the findings of V. Di Lollo, J. T. Enns, and R. A. Rensink (2000). They also claimed that notions of reentrant processing are not necessary because all of V. Di Lollo et al.'s data can be explained by feed-forward models. The authors show that G. Francis and F. Hermens's claims are vitiated by inappropriate modeling of attention and by ignoring important aspects of V. Di Lollo et al.'s results.
An overview is presented of the ways that change blindness has been applied to the study of various issues in perception and cognition. Topics include mechanisms of change perception, allocation of attention, nonconscious perception, and cognitive beliefs. Recent work using change blindness to investigate these topics is surveyed, along with a brief discussion of some of the ways that these approaches may further develop over the next few years.
It has often been assumed that when we use vision to become aware of an object or event in our surroundings, this must be accompanied by a corresponding visual experience (i.e., seeing). The studies reported here show that this assumption is incorrect. When observers view a sequence of displays alternating between an image of a scene and the same image changed in some way, they often feel (or sense) the change even though they have no visual experience of it. The subjective difference between sensing and seeing is mirrored in several behavioral differences, suggesting that these are two distinct modes of conscious visual perception.
Scene perception is the visual perception of an environment as viewed by an observer at any given time. It includes not only the perception of individual objects, but also such things as their relative locations, and expectations about what other kinds of objects might be encountered. Given that scene perception is so effortless for most observers, it might be thought of as something easy to understand. However, the amount of effort required by a process often bears little relation to its underlying complexity. A closer look shows that scene perception is a highly complex activity, and that any account of it must deal with several difficult issues: What exactly is a scene? What aspects of it do we represent? And what are the processes involved? Finding the answers to these questions has proven to be extraordinarily difficult. However, answers are being found, and a general understanding of scene perception is beginning to emerge. Interestingly, this emerging picture shows that much of our subjective experience as observers is highly misleading, at least with regard to the way that scene perception is carried out. In particular, the impression of a stable picture-like representation somewhere in our heads turns out to be largely an illusion. To see how this comes about, imagine a seashore where there is a sailboat, some rocks, some clouds, and perhaps a few other objects (see Figure 1). How do we perceive this scene? Intuitively, it seems that the set of objects in the environment would give rise to a corresponding set of representations in the observer. Thus, there would be detailed representations of the sailboat, clouds, etc., with each representation describing the identity, location, and 'meaning' of the item it refers to. In this view, the goal of scene perception is to form a literal re-presentation of the world, with all of its visible structure represented concurrently and in great detail everywhere. This representation then serves as the basis for all subsequent visual processing…
This paper discusses several key issues concerning consciousness and human vision. A brief overview is presented of recent developments in this area, including issues that have been resolved and issues that remain unsettled. Based on this, three Hilbert questions are proposed. These involve three related sets of issues: the kinds of visual experience that exist, the kinds of visual attention that exist, and the ways that these relate to each other.
It is argued here that cognitive science currently neglects an important source of insight into the human mind: the effects created by magicians. Over the centuries, magicians have learned how to perform acts that are perceived as defying the laws of nature, and that induce a strong sense of wonder. This article argues that the time has come to examine the scientific bases behind such phenomena, and to create a science of magic linked to relevant areas of cognitive science. Concrete examples are taken from three areas of magic: the ability to control attention, to distort perception, and to influence choice. It is shown how such knowledge can help develop new tools and indicate new avenues of research into human perception and cognition.
This chapter presents an overview of several recent developments in vision science, and outlines some of their implications for the management of visual attention in graphic displays. These include ways of sending attention to the right item at the right time, techniques to improve attentional efficiency, and possibilities for offloading some of the processing typically done by attention onto nonattentional mechanisms. In addition, it is argued that such techniques not only allow more effective use to be made of visual attention, but also open up new possibilities for human-machine interaction.
We present a rigorous way to evaluate the visual perception of correlation in scatterplots, based on classical psychophysical methods originally developed for simple properties such as brightness. Although scatterplots are graphically complex, the quantity they convey is relatively simple. As such, it may be possible to assess the perception of correlation in a similar way.
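A staple of the classical psychophysical methods mentioned above is the adaptive staircase. The sketch below shows a minimal 1-up/2-down staircase of the kind that could be used to estimate a just-noticeable difference in correlation between two scatterplots: the difference shrinks after two consecutive correct responses and grows after an error, converging near the ~71%-correct level. The function and its parameters are illustrative assumptions, not the paper's exact procedure.

```python
def staircase(responses, start=0.3, step=0.05, floor=0.0):
    """1-up/2-down staircase over a stimulus difference (e.g., the
    difference in correlation between two scatterplots). Takes a
    sequence of booleans (True = correct response) and returns the
    difference level presented on each trial."""
    level = start
    correct_run = 0
    levels = []
    for correct in responses:
        levels.append(level)
        if correct:
            correct_run += 1
            if correct_run == 2:          # two in a row: make it harder
                level = max(floor, level - step)
                correct_run = 0
        else:                             # error: make it easier
            level = level + step
            correct_run = 0
    return levels
```

Averaging the levels at the staircase's reversal points gives a threshold estimate, which is how discrimination thresholds for simple properties such as brightness are classically measured.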
Recent advances in our understanding of visual perception have shown it to be a far more complex and counterintuitive process than previously believed. Several important consequences follow from this. First, the design of an effective statistical graphics system is unlikely to succeed based on intuition alone; instead, it must rely on a more sophisticated, systematic approach. The basic elements of such an approach are outlined here, along with several design principles. An overview is then given of recent advances in our understanding of visual perception, including rapid perception, visual attention, and scene perception. It is then argued that the mechanisms involved can be successfully harnessed to allow data to be displayed more effectively than at present. Several directions of development are discussed, including effective use of visual attention, the display of dynamic information, and the effective use of nonattentional and nonconscious perceptual systems.
Information management systems improve the retention of information in large collections. As such they act as memory prostheses, implying an ideal basis in human memory models. Since humans process information by association, and situate it in the context of space and time, systems should maximize their effectiveness by mimicking these functions. Since human attentional capacity is limited, systems should scaffold cognitive efforts in a comprehensible manner. We propose the Principles of Mnemonic Associative Knowledge (P-MAK), which describes a framework for semantically identifying, organizing, and retrieving information, and for encoding episodic events by time and stimuli. Inspired by prominent human memory models, we propose associative networks as a preferred representation. Networks are ideal for their parsimony, flexibility, and ease of inspection. Networks also possess topological properties—such as clusters, hubs, and the small world—that aid analysis and navigation in an information space. Our cognitive perspective addresses fundamental problems faced by information management systems, in particular the retrieval of related items and the representation of context. We present evidence from neuroscience and memory research in support of this approach, and discuss the implications of systems design within the constraints of P-MAK’s principles, using text documents as an illustrative semantic domain.
Change blindness is the striking failure to see large changes that normally would be noticed easily. Over the past decade this phenomenon has greatly contributed to our understanding of attention, perception, and even consciousness. The surprising extent of change blindness explains its broad appeal, but its counterintuitive nature has also engendered confusions about the kinds of inferences that legitimately follow from it. Here we discuss the legitimate and the erroneous inferences that have been drawn, and offer a set of requirements to help separate them. In doing so, we clarify the genuine contributions of change blindness research to our understanding of visual perception and awareness, and provide a glimpse of some ways in which change blindness might shape future research.