
J Exp Psychol Gen. Author manuscript; available in PMC 2015 Jun 2.
PMCID: PMC4451378
NIHMSID: NIHMS674909
PMID: 25938178

The Dynamics of Categorization: Unraveling Rapid Categorization

Abstract

We explore a puzzle of visual object categorization: Under normal viewing conditions, you spot something as a dog fastest, but at a glance, you spot it faster as an animal. During speeded category verification, a classic basic-level advantage is commonly observed (Rosch, Mervis, Gray, Johnson, & Boyes-Braem, 1976), with categorization as a dog faster than as an animal (superordinate) or Golden Retriever (subordinate). A different story emerges during ultra-rapid categorization with limited exposure duration (<30ms), with superordinate categorization faster than basic or subordinate categorization (Thorpe, Fize, & Marlot, 1996). These two widely cited findings paint contrary theoretical pictures about the time course of object categorization, yet no previous study has investigated them together. Over five experiments, we systematically examined two experimental factors that could explain the qualitative difference in categorization across the two paradigms: exposure duration and category trial context. Mapping out the time course of object categorization by manipulating exposure duration and the timing of a post-stimulus mask revealed that brief exposure durations favor superordinate-level categorization, but with more time a basic-level advantage emerges. However, these advantages were modulated by target category trial context. With randomized target categories, the superordinate advantage was eliminated; and with “blocks” of only four repetitions of superordinate categorization within an otherwise randomized context, the basic-level advantage was eliminated. Contrary to theoretical accounts that dictate a fixed priority for certain levels of abstraction in visual processing and access to semantic knowledge, the dynamics of object categorization are flexible, depending jointly on the level of abstraction, time for perceptual encoding, and category context.

A brief glance in the backyard reveals a flutter of activity at the birdfeeder. A solitary object is perched on the feeder, yet a collection of categories can come to mind: living object, animal, bird, American Robin. The ease with which these categories come to mind masks the complex processes mapping perceptual information onto stored representations of known categories. What category was available first? Did you see the animal before the bird or vice versa? When did you recognize it as an American Robin? Do certain categories have priority? Did you first need to see it as a bird and only then recognize what kind of bird it was? Or did you first need to see it as an animal before you could recognize what kind of animal it was? Or perhaps multiple levels of the categorization hierarchy were accessed in parallel?

The relative speed of categorization at different levels of abstraction has long been a fundamental experimental measure used to understand how objects are categorized and how semantic knowledge is organized and accessed (e.g., Rosch, Mervis, Gray, Johnson, & Boyes-Braem, 1976; Smith, Shoben, & Rips, 1974; see Mack & Palmeri, 2011, for one recent review). The seminal work of Rosch and colleagues (Mervis & Rosch, 1981; Rosch et al., 1976) described the privileged status of the so-called basic level of the category hierarchy. The basic level is defined as object categories at an intermediate level of abstraction (e.g., bird, car, chair) that “carves nature at its joints”, with members of the same basic-level category sharing similar shapes and functions that are distinct from those of other basic-level categories. Basic-level categories typically show an advantage over categories more superordinate (e.g., animal, vehicle, furniture) or subordinate (e.g., American Robin, Toyota Camry, Windsor). For example, in speeded category verification tasks, basic-level categories are verified more quickly than subordinate and superordinate categories (Rosch et al., 1976). The level showing this speed advantage was later termed the entry level (Jolicoeur, Gluck, & Kosslyn, 1984), reflecting the point at which perceptual information first makes contact with stored category knowledge.

The rich and varied literature investigating the relative speed of categorization at different levels of abstraction reflects its theoretical importance. The entry level of categorization is a consequence of the critical intersection of visual perception and semantic knowledge (Palmeri & Gauthier, 2004; Palmeri & Tarr, 2008; Richler & Palmeri, 2014). As a result, this literature has impacted our theoretical understanding of how perception makes contact with knowledge (e.g., Bowers & Jones, 2008; Jolicoeur et al., 1984; Mack & Palmeri, 2010a), how semantic knowledge is organized and accessed (e.g., Kruschke, 1992; Murphy & Brownell, 1985; Nosofsky, 1986; Rogers & Patterson, 2007; Smith et al., 1974), how visual perception and category knowledge change with development (e.g., Mandler, Bauer, & McDonough, 1991; Mandler & McDonough, 2000), learning (e.g., Schyns, Goldstone, & Thibault, 1998; Scott, Tanaka, Sheinberg, & Curran, 2008; Wong, Palmeri, & Gauthier, 2009), and expertise (e.g., Johnson & Mervis, 1997; Palmeri, Wong, & Gauthier, 2004; Tanaka & Taylor, 1991), as well as the neural basis of visual perception (e.g., Gauthier & Palmeri, 2002; Sigala & Logothetis, 2002), object categorization (e.g., Freedman, Riesenhuber, Poggio, & Miller, 2001; Gauthier, Skudlarski, Gore, & Anderson, 2000; Mack, Preston, & Love, 2013; Marsolek, 1999), and semantic knowledge (e.g., Carlson, Simmons, Kriegeskorte, & Slevc, 2013; Farah, 1990; Patterson, Nestor, & Rogers, 2007).

Findings about whether categorization is faster at one level of abstraction than another have fueled theoretical debates about whether variation in the temporal dynamics of object categorization reflects discrete stages of object categorization (e.g., Grill-Spector & Kanwisher, 2005; Jolicoeur et al., 1984), differential accumulation of perceptual evidence over time (e.g., Bowers & Jones, 2008; Mack & Palmeri, 2010a; Palmeri et al., 2004), feedforward versus feedback mechanisms (e.g., Serre, Oliva, & Poggio, 2007), and how quickly diagnostic perceptual information might become available (e.g., Lamberts, 2000; Oliva & Schyns, 1997). Understanding whether certain levels of abstraction are accessed more quickly than others offers an important constraint on models of neural information processing that formalize how temporally-dependent neural measures give rise to acts of cognition (e.g., Fabre-Thorpe, 2011; Mack & Palmeri, 2011; Palmeri & Tarr, 2008). To understand the mechanisms mediating the relative speed of categorization at different levels of abstraction is to better understand the interaction of core processes of perception and cognition.

While for most common objects the entry level is the basic level, it has long been known that the privileged level of the category hierarchy is not fixed. Development, knowledge, and experience can influence the entry level. Atypical objects often show privileged access at the subordinate level rather than the basic level (Jolicoeur et al., 1984; Murphy & Brownell, 1985). Extensive experience can change the relative speed of categorization, such that experts categorize objects within their domain of expertise just as quickly at more subordinate levels as at the basic level (Johnson & Mervis, 1997; Tanaka & Taylor, 1991). Privileged access to basic-level categories emerges during early development (Rosch et al., 1976), with young children showing a relative advantage for categories at more superordinate levels until around 18 months of age (Mandler et al., 1991). In addition, the entry level of categorization may be vulnerable to certain neuropsychological impairments, such as semantic dementia, whereby patients lose access to basic-level categories and show a systematic preference for more general levels of abstraction (Hodges, Graham, & Patterson, 1995). Collectively, although these findings suggest that the relative speed of categorization is malleable depending on development, expertise, and impairment, it is commonly found that traditional Roschian basic-level categories do in general enjoy a privileged status in the healthy adult categorization system.

The basic-level advantage found using a category verification task contrasts markedly with a widely-reproduced superordinate-level advantage found using an ultra-rapid categorization task (Bacon-Macé, Kirchner, Fabre-Thorpe, & Thorpe, 2007; Delorme, Rousselet, Macé, & Fabre-Thorpe, 2004; Fize, Fabre-Thorpe, Richard, Doyon, & Thorpe, 2005; Kirchner & Thorpe, 2006; Macé, Joubert, Nespoulous, & Fabre-Thorpe, 2009; Rousselet, Macé, & Fabre-Thorpe, 2003; Thorpe, Fize, & Marlot, 1996; VanRullen & Thorpe, 2001a, 2001b). As the name implies, in ultra-rapid categorization tasks, objects are exposed very briefly – often one or two screen refreshes on old CRT monitors (~30ms). Very fast and accurate responses verifying categories at the superordinate level are observed (Fabre-Thorpe et al., 2001; Thorpe et al., 1996), leaving little room for an even faster basic-level categorization (VanRullen & Thorpe, 2001a). Indeed, Macé et al. (2009) observed faster superordinate than basic-level ultra-rapid categorization, suggesting that “you spot the animal faster than the bird”. Findings from ultra-rapid categorization have been taken to suggest that perceptual information first makes contact not with basic-level categories, but with categories at more general, superordinate levels of the conceptual hierarchy.

How can we reconcile this apparent paradox? Certainly on the surface, findings from speeded category verification tasks and ultra-rapid categorization tasks seem to support contradictory accounts of the time course of categorization. Given how much theoretical work in perception, cognition, development, and neuroscience is grounded in understanding the time course of categorization at different levels of abstraction, it is important to reconcile and understand these markedly different empirical results. That is the focus of this article.

To begin to resolve this paradox, we need to deconstruct the methodological differences between speeded category verification and ultra-rapid categorization. In particular, we focus on the factors that modulate the qualitative results and bear most centrally on the theoretical mechanisms of visual object categorization. The most salient difference is stimulus exposure duration. The “ultra-rapid” in ultra-rapid categorization refers not only to the speed of the categorization decision, but also to the stimulus exposure: Images are displayed very briefly, between 8 and 30 ms depending on the particular experiment. By contrast, in speeded category verification, images are displayed for far longer, often until a response is made.

Why might variation in exposure duration fundamentally affect the speed of categorization at different levels of abstraction? For one, a common element in a range of theories is that internal object representations follow an evolving coarse-to-fine trajectory over time (e.g., Lamberts, 2000; Rogers & Patterson, 2007; Schyns & Oliva, 1994; Serre, Oliva, & Poggio, 2007), and coarse features alone might be sufficient for rapid superordinate categorization but not basic-level categorization. For another, the presence of a single salient feature, say an eye or a leg, might be sufficient to suggest the presence of an animal, a superordinate categorization, but insufficient to categorize efficiently at the basic level (e.g., Thorpe et al., 1996). Some have even suggested that an initial, rapid, feedforward sweep through the visual system activates superordinate categories in frontal areas first and that these representations then provide the perceptual system with top-down information for making more fine-grained categorizations at the basic or subordinate levels (Bar, 2004; Thorpe et al., 1996). According to that view, abstract superordinate levels of categorization are the first contact with conceptual knowledge.

Articles often highlight exposure duration as if it were the only critical experimental factor affecting whether a superordinate or basic-level advantage is observed, perhaps because it engenders a fairly straightforward theoretical explanation. But things are not so simple. A second factor that varies markedly between speeded category verification and ultra-rapid categorization is the trial structure: target categories at different levels of abstraction are typically randomized in speeded category verification tasks but are typically blocked in ultra-rapid categorization tasks. Outside of areas where trial sequence effects are often the coin of the realm, like priming or cognitive control (e.g., Logan, Schneider, & Bundesen, 2007; McNamara, 2005; Pouget, Logan, Palmeri, Boucher, Paré, & Schall, 2011), trial and context effects are often either ignored or considered a nuisance (see Palmeri & Mack, 2015). But there is growing understanding that local trial context can carry important explanatory weight (e.g., Jones, Curran, Mozer, & Wilder, 2013; Stewart, Brown, & Chater, 2005). It is not hard to imagine that category context could potentially impact the speed of categorization at different levels of abstraction. For example, object categorization often requires selective attention to diagnostic dimensions (Kruschke, 1992; Nosofsky, 1986), and different features may be diagnostic for superordinate versus basic-level categories (Palmeri, 1999), so switching between levels of categorization from trial to trial requires new patterns of selective attention to be established. Also, a string of trials of the same kind of categorization, at the same level of abstraction, can produce priming, affecting whether some categorizations are faster than others. These are just two examples of how local trial context might influence the relative speed of categorization at different levels of abstraction in a systematic way.

Before further speculating on why category context might matter, we need to test whether it does matter. In fact, we show that both exposure duration and category context matter. In five experiments, we systematically explore these two experimental factors to understand and explain the competing findings from the classic speeded category verification task and the ultra-rapid categorization task. In so doing, our findings offer a reconciliation of differing theoretical accounts of how categorization unfolds over time.

Experiment 1: Comparing Ultra-Rapid Categorization and Speeded Category Verification

Ultra-rapid categorization tasks use brief stimulus exposure with target categories that are blocked by level of abstraction. Speeded category verification tasks use long stimulus exposure with target categories that are randomized by level of abstraction. Experiment 1 filled in the missing cells of this factorial design. In a between-subjects design, exposure was either brief (25ms) or long (250ms) and target categories were either blocked or randomized. In all conditions, the same collection of objects was categorized at superordinate, basic, or subordinate levels.

There are important reasons to consider both of these factors beyond simply filling in some missing experimental conditions. Brief exposure to sensory information limits perceptual processing in ways that could bias representations towards superordinate categories over basic-level categories (e.g., Fabre-Thorpe, 2011; Rogers & Patterson, 2007). If the superordinate advantage observed during ultra-rapid categorization were caused by brief exposure alone, then a superordinate advantage would be expected regardless of target category context. However, when target category context is blocked, creating an experimental context that focuses solely on either superordinate or basic-level categories over a long series of trials, there could emerge systematic differences in processing (e.g., response criterion shifts, tuning of perceptual strategies, or trial-to-trial priming) across levels of abstraction. If the superordinate advantage observed during ultra-rapid categorization is caused by blocked target category context, then that advantage would be expected regardless of whether the exposure was brief or long.

To foreshadow our results, we observed that a conjunction of brief exposure and blocked target category context is necessary to observe a superordinate-level advantage. The remaining experiments further explore this finding.

Methods

Participants

Fifty-six Vanderbilt University undergraduate students (35 female, age range 18-23, average age 19.2 years) with normal or corrected-to-normal vision participated in this experiment. Participants received course credit for their participation. Informed consent was obtained prior to participation in accordance with Vanderbilt University's Institutional Review Board.

Stimuli

Stimuli for Experiment 1 consisted of images of dogs, birds, flowers, and trees. Dog stimuli consisted of images of the eight most popular dog breeds for 2010 according to the American Kennel Club (http://www.akc.org/): Beagle, Boxer, Bulldog, Dachshund, German Shepherd, Golden Retriever, Labrador Retriever, Yorkshire Terrier. Bird stimuli consisted of images of the eight most frequently photographed “backyard birds” according to the Cornell Lab of Ornithology (http://www.allaboutbirds.org): Blue Jay, Northern Cardinal, Crow, Cooper's Hawk, Oriole, Rock Pigeon, American Robin, Tree Sparrow. Participant familiarity with the dog breeds and bird species was confirmed before the experiment. Flower images consisted of thirty-two close-up views of fully bloomed flowers from many varied flower species. Tree images consisted of thirty-two images showing entire trees from many tree species with minimal scene background. Tree stimuli included conifers and deciduous trees, both with and without leaves. Flower and tree stimuli were tested only on superordinate categorization trials; flowers and trees were not included on basic-level and subordinate categorization trials. Stimulus images were collected from various online sources. To reduce the influence of scene context on object categorization (e.g., Bar, 2004), the stimulus images were selected and cropped so that pictured objects were prominent and the background scene context was limited though not eliminated. No stimulus image was repeated during an experimental session. Example stimulus images are shown in Figure 1. Participants sat approximately 60 cm from the experiment monitor and stimuli subtended no more than 13° × 13° of visual angle.

Figure 1. Example stimuli used in all experiments (by row: dogs, birds, animals, plants, means of transportation). Images were full color and cropped to minimize the amount of background scene context.

Procedure

The two factors (Exposure Duration: 25 or 250ms; Target Context: blocked or randomized) were fully crossed to create four between-subject experimental conditions (25ms duration/blocked, 250ms duration/blocked, 25ms duration/randomized, and 250ms duration/randomized). Participants were randomly assigned to one of these four conditions. A trial consisted of an initial fixation cross presented for 800ms, followed by a superordinate (“animal” or “plant”), basic (“bird” or “dog”), or subordinate (breed and species names listed earlier) category label presented for 1000ms, followed by a stimulus image presented for 25 or 250ms depending on the exposure duration condition. Participants were instructed to respond whether the pictured object matched the category label (yes) or not (no). Participants responded on a keyboard by pressing the “1” key for a yes response and the “2” key for a no response, using the index and middle fingers of their right hand. Responses could be made up to 1250ms after the onset of the stimulus image. Trials were evenly divided between “yes” and “no” trials. “No” trials paired the category label with an object from the same level of the conceptual hierarchy (e.g., “animal” with an image of a flower, “bird” with an image of a dog, and “Boxer” with an image of a Dachshund).

Trials were presented in blocks of 36 trials. The blocked target category condition presented category labels from a single level of abstraction within a block, with separate blocks for each level; the order of the blocks was randomized across participants. The randomized target category condition presented category labels from different levels of abstraction randomly throughout the experiment.

All participants received the same instructions. These instructions did not highlight the factors that were manipulated; participants were not made aware of the target category context or the duration of the stimulus exposure. The entire experiment consisted of 12 practice trials and 216 experimental trials (72 trials in each of the 3 categorization types) and lasted approximately 35 minutes.

Results

The results from each of the four experiment conditions were analyzed separately and in the same manner. Average median response times for correct “yes” trials and sensitivity (d’) for superordinate, basic, and subordinate categorization in the four conditions are presented in Figure 2. For each participant, sensitivity was derived separately for each categorization level from hit and false alarm rates. Following Macé et al. (2009), superordinate performance was calculated from only those trials relevant for the animal category (e.g., correct trials verifying an animal and false alarm trials incorrectly verifying plants as an animal). A one-way analysis of variance with category level (superordinate / basic / subordinate) as a within-subject factor was conducted on correct “yes” RT and sensitivity for each condition; we report one-way ANOVAs because we are particularly interested in the planned comparison of superordinate versus basic-level categorization within each condition, the comparison typically reported in the single-condition experiments on which this multi-factor experiment was modeled. A significant effect of category level was observed in all tests of response times (25ms exposure duration and blocked context: F1,13 = 4.89, MSE = 3483.2, p = 0.015, ηp2 = 0.259; 25ms exposure duration and randomized context: F1,13 = 17.97, MSE = 1256.6, p < 0.001, ηp2 = 0.431; 250ms exposure duration and blocked context: F1,13 = 9.86, MSE = 2313.4, p < 0.001, ηp2 = 0.581; 250ms exposure duration and randomized context: F1,13 = 11.54, MSE = 1177.4, p < 0.001, ηp2 = 0.471) and sensitivity (25ms exposure duration and blocked context: F1,13 = 32.19, MSE = 0.214, p < 0.001, ηp2 = 0.697; 25ms exposure duration and randomized context: F1,13 = 45.48, MSE = 0.149, p < 0.001, ηp2 = 0.778; 250ms exposure duration and blocked context: F1,13 = 20.32, MSE = 0.216, p < 0.001, ηp2 = 0.609; 250ms exposure duration and randomized context: F1,13 = 29.17, MSE = 0.205, p < 0.001, ηp2 = 0.692).
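To make the sensitivity measure concrete, the sketch below shows one conventional way to compute d’ from hit and false-alarm counts for a single participant at one categorization level. This is a minimal illustration, not the authors' analysis code; the log-linear correction for extreme rates and the trial counts in the example are our assumptions.

```python
# Minimal sketch (not the authors' analysis code): computing sensitivity (d')
# from hit and false-alarm counts for one participant at one category level.
from scipy.stats import norm

def d_prime(n_hits, n_yes_trials, n_false_alarms, n_no_trials):
    # Log-linear correction (an assumed choice) keeps rates away from 0 and 1,
    # which would otherwise produce infinite z-scores.
    hit_rate = (n_hits + 0.5) / (n_yes_trials + 1.0)
    fa_rate = (n_false_alarms + 0.5) / (n_no_trials + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Example: 34 hits on 36 "yes" trials and 3 false alarms on 36 "no" trials.
print(round(d_prime(34, 36, 3, 36), 2))
```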

Figure 2. Median correct “yes” RT and sensitivity (d’) for superordinate, basic, and subordinate categorization for brief (25ms) and long (250ms) exposures for both blocked target (two left panels) and randomized target contexts (two right panels). Error bars show 95% confidence intervals based on the main effect of category level.

The nature of the main effects observed in each one-way ANOVA was examined using planned comparisons (paired t-tests) contrasting basic-level categorization with superordinate and subordinate categorization. An advantage for the basic level was observed in all conditions except the condition with brief exposure and blocked target context – the very condition corresponding to the design of ultra-rapid categorization tasks. For random target context with brief exposures, basic-level categorization was more accurate and faster than superordinate (d’: t13 = 4.28, p = 0.009, d = 0.976; RT: t13 = 8.07, p < 0.0001, d = 0.659) and subordinate (d’: t13 = 7.96, p < 0.0001, d = 0.608; RT: t13 = 4.23, p < 0.001, d = 2.21) categorization. Similarly, for random target context with long exposures, basic-level categorization was more accurate and faster than superordinate (d’: t13 = 3.07, p = 0.009, d = 1.11; RT: t13 = 5.18, p = 0.0002, d = 0.728) and subordinate (d’: t13 = 6.64, p < 0.0001, d = 2.31; RT: t13 = 4.43, p = 0.0007, d = 0.731) categorization. For blocked target context with long exposures, basic-level categorization was as accurate as but faster than superordinate categorization (d’: t13 = 0.023, p = 0.982, d = 0.01; RT: t13 = 3.18, p = 0.0072, d = 0.355) and more accurate and faster than subordinate categorization (d’: t13 = 5.67, p = 0.0001, d = 1.85; RT: t13 = 3.83, p = 0.002, d = 0.66). Importantly, for blocked target context with brief exposures, the conditions of ultra-rapid categorization tasks, while basic-level categorization was more accurate and faster than subordinate categorization (d’: t13 = 6.58, p < 0.0001, d = 2.21; RT: t13 = 2.86, p = 0.0127, d = 0.404), there were no differences in accuracy or response time between basic-level and superordinate categorization (d’: t13 = 0.707, p = 0.491, d = 0.11; RT: t13 = 0.167, p = 0.870, d = 0.021).
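As an illustration of how such planned comparisons can be computed, the sketch below runs a paired t-test on hypothetical per-participant median response times and reports a Cohen's d for repeated measures (standardizing the mean difference by the standard deviation of the difference scores). The effect-size convention and the data are assumptions for illustration; the paper does not specify its exact formula.

```python
# Minimal sketch of a planned comparison: paired t-test on per-participant
# median correct-"yes" RTs (basic vs. superordinate) with a Cohen's d for
# repeated measures. Data below are hypothetical, not the published values.
import numpy as np
from scipy.stats import ttest_rel

def paired_comparison(condition_a, condition_b):
    a, b = np.asarray(condition_a, float), np.asarray(condition_b, float)
    t, p = ttest_rel(a, b)
    diffs = a - b
    d = diffs.mean() / diffs.std(ddof=1)  # assumed effect-size convention
    return t, p, d

basic_rts = [480, 455, 510, 470, 495, 460, 505, 475]
superordinate_rts = [500, 470, 515, 490, 505, 480, 520, 485]
print(paired_comparison(basic_rts, superordinate_rts))
```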

Discussion

Experiment 1 crossed the factors of exposure duration and target category context to investigate the critical factors that affect the relative speed of categorization at different levels of abstraction. Speeded category verification tasks typically use long exposures and randomized target category context, observing a classic basic-level advantage. Ultra-rapid categorization tasks use brief exposures and blocked target category context, observing a superordinate advantage. Is it exposure duration or target category context that matters? The answer seems to be both.

With long exposures, a basic-level advantage in response times, sensitivity, or both was observed regardless of whether the target category context was blocked or randomized. Only with brief exposures did target category context matter. With brief exposures and randomized target category, an advantage for the basic level was observed. But such an advantage at the basic level was absent with brief exposures and blocked target category, the very combination of conditions used in ultra-rapid categorization tasks.

In the condition with brief exposures and blocked target context, we observed equivalent speed and accuracy for superordinate and basic-level categorization. While not a superordinate advantage per se, these results represent a clear departure from the classic basic-level advantage observed in the other three conditions and universally observed in past speeded category verification tasks. In an additional experiment using the same stimuli with brief exposures and blocked target category context, we did indeed observe a significant superordinate advantage during ultra-rapid categorization, with faster responses for superordinate (RT=422ms) than basic-level (RT=435ms) categorization (t29 = 2.519, p = 0.019), and more accurate responses for superordinate (d’=3.683) than basic-level (d’=3.361) categorization (t29 = 3.438, p = 0.002). Although that experiment used the same stimuli, trial structure, and category context, it used considerably more test trials within each block (156 versus 36) than Experiment 1. Together, these findings provide clear evidence that category context, not just exposure duration, can influence the relative speed of categorization at different levels of abstraction.

Both exposure duration and target category context are important for eliminating the basic-level advantage. As noted earlier, it is often argued that the relative speed of ultra-rapid categorization is due to rapid stimulus exposure: Categorizing objects at a glance depends on a fast initial wave of feedforward processing that maps visual inputs onto superordinate category knowledge (Thorpe et al., 1996); longer exposure is required to encode more detailed perceptual features and permits top-down feedback modulation that leads to faster categorization at the basic level (Fabre-Thorpe, 2011; Macé et al., 2009; Rogers & Patterson, 2007; VanRullen & Koch, 2003; VanRullen & Thorpe, 2001a). The local context of categorization trials within an experiment, such as whether target categories are blocked or randomized, is not considered when explaining ultra-rapid categorization. Indeed, Macé et al. (2009) seemed to dismiss any effect of blocking by target category by suggesting that an effect of blocking should be seen equally across category levels, not just the superordinate level. The results from our experiment suggest otherwise. Brief exposures are critical to eliminating the basic-level advantage, but only when the experimental context focuses solely on categorizing at a particular level of abstraction. Indeed, if we had observed that only exposure duration (limited perceptual encoding) mattered, as suggested by Macé et al. (2009), we would have had on our hands a single-experiment brief report, not this multi-experiment article.

The results from Experiment 1 demonstrate the critical role of both exposure duration and target context in ultra-rapid categorization. The following experiments explore in more detail how exposure duration and target category context affect the speed of categorization at different levels of abstraction. Experiments 2 and 3 examine further the effects of exposure duration when target categories are blocked. Experiments 4 and 5 examine further the effects of target category context when exposures are brief.

Experiment 2: The Time Course of Categorization at Basic and Superordinate Levels

Models of visual object recognition often describe perceptual processing stages that transform a high-dimensional retinal image into a relatively low-dimensional object representation (e.g., Dailey & Cottrell, 1999; Edelman, 1999; Palmeri & Tarr, 2008; Riesenhuber & Poggio, 2000). Forming this perceptual representation takes time, with some perceptual features available earlier than others (Lamberts, 2000; Oliva & Schyns, 1997). The prominent explanation for ultra-rapid categorization findings is that with very brief stimulus exposure, the limited perceptual information available for categorization supports superordinate categories relatively more so than basic-level categories (Macé et al., 2009). However, with longer exposure, an increasing number of perceptual features support an object's basic-level category, leading to a basic-level advantage (Rogers & Patterson, 2007).

At least for blocked category contexts, the effects of exposure duration in Experiment 1 are largely consistent with this explanation. The hypothesized time course (e.g., Rogers & Patterson, 2007) predicts that early in perceptual processing, the evidence for superordinate categories should be stronger than the evidence for basic categories, but that with more perceptual processing this pattern should reverse. Experiments 2 and 3 tested participants on both basic-level and superordinate categorization with fine-grained manipulations of stimulus exposure duration. Unlike Experiment 1, Experiments 2 and 3 used backward masking (Breitmeyer & Ogmen, 2006) to obtain more experimental control over the time course of perceptual processing (Figure 3).

Figure 3. Comparison of trial timing for Experiments 2 and 3. In Experiment 2, stimuli were presented for varying exposure durations and immediately followed by a dynamic mask. In Experiment 3, exposure duration was held constant (25ms) but the time between stimulus onset and mask onset varied.

While perceptual processing is not disrupted entirely by the onset of a mask (e.g., Rolls, Tovee, & Panzeri, 1999), many have argued that backward masking does control the amount of time available to extract and represent perceptual information. Here, masking was used in two ways, as illustrated in Figure 3. In Experiment 2, stimuli were presented for systematically varying exposure durations followed immediately by the onset of a dynamic mask. In Experiment 3, all stimuli were presented for the same fixed, brief exposure duration of 25ms, the same exposure duration used by most studies of ultra-rapid categorization. In this case, what was manipulated was the timing of the onset of the backward mask; the mask could appear at varying times from 0 to 100ms after the offset of the image. As such, Experiment 3 held image exposure duration constant while manipulating the amount of time available to perceptually process that brief image before processing was disrupted by the appearance of the mask. The critical question in Experiments 2 and 3 is not how quickly participants can make a response, but to what extent perceptual processes can extract decision-relevant information given a precisely controlled amount of available processing time. As such, in Experiments 2 and 3, we focus our analyses on accuracy/sensitivity instead of response times.

Methods

Participants

Twenty-four Vanderbilt University undergraduate students (16 female, age range 18-22, average age 19.3 years) with normal or corrected-to-normal vision participated in this experiment. Participants received course credit for their participation. Informed consent was obtained prior to participation in accordance with Vanderbilt University's Institutional Review Board.

Stimuli

Stimuli consisted of images of dogs, animals, and means of transportation collected from various web sources. The animal stimulus set included a large variety of species (elk, deer, moose, elephant, tiger, hippopotamus, rhinoceros, bald eagle, mountain lion, bear, American robin, hummingbird, squirrel, rabbit, polar bear), with at least six images of each species. Similarly, images of means of transportation spanned many categories (including airplanes, jet planes, helicopters, sailboats, trains, bicycles, cruise ships, and motorcycles), with at least six images per category. Dog images included the eight dog breeds from Experiment 1 plus additional images of the same eight breeds. Mask stimuli consisted of four frames of randomly generated images constructed from contrast-normalized, band-pass filtered white noise (Bacon-Macé et al., 2005). Each mask frame was presented for 17ms, making the total mask duration 68ms. Stimulus dimensions were the same as in Experiment 1. No stimulus image was repeated in an experimental session.

Procedure

Participants performed a category verification task with a target category at a superordinate or basic level. Following the design of ultra-rapid categorization tasks, target category was blocked. At the beginning of each block of trials, participants were shown a label of a superordinate category (“animal”) or a basic-level category (“dog”) that served as the target category for all trials within that block. As in Macé et al. (2009), half of the “yes” animal trials showed a dog stimulus image to allow for direct comparison of basic-level and superordinate categorization behavior on the same basic-level category. A means-of-transportation image was randomly selected for each superordinate “no” trial. On every trial, a fixation cross was presented for 300-900ms, followed by presentation of a stimulus image for a duration of 25, 33, 50, 75, 125, or 250ms, immediately followed by a dynamic mask. Participants were instructed to respond “yes” if the object in the stimulus image belonged to the target category and “no” otherwise (half of all trials were “yes” trials, half were “no” trials). Participants had 1000ms from stimulus onset to make a response by pressing one of two labeled keys on a standard keyboard. A trial concluded with a 500ms blank screen before the next trial began. Participants completed three consecutive blocks of 104 trials with the superordinate category animal as a target and three consecutive blocks of 104 trials with the basic-level category dog as a target. Half of the participants performed the basic-level target blocks first and the other half performed the superordinate target blocks first. The order of the exposure durations was randomized within each block. The entire experiment consisted of 624 trials (52 trials in each of the 12 conditions) and lasted approximately 40 minutes.

Results

The order of superordinate versus basic-level categorization blocks produced similar patterns of performance, so the reported analyses and results were collapsed across order. Figure 4 presents the average sensitivity (d’) as a function of exposure duration for superordinate and basic-level categorization. To directly compare superordinate and basic-level categorization performance on the same category (“dog”), superordinate sensitivity was derived from the hit rate for “yes” superordinate trials with a dog stimulus image (half of the “yes” superordinate animal trials) and the false alarm rate from all of the “no” superordinate trials. A 2 × 6 analysis of variance was conducted with Category Level (superordinate versus basic) and Exposure Duration (25, 33, 50, 75, 125, or 250ms) as within-subject factors for both sensitivity and response time. Planned comparisons between superordinate and basic-level categorization were conducted with Wilcoxon signed-rank tests with the null distribution estimated from 5000 Monte Carlo random resamples. Planned comparison results are reported with FDR-corrected p values (Benjamini & Hochberg, 1995) as well as 95% confidence intervals of the estimated differences between categorization levels.
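The sketch below illustrates one way such a resampling-based analysis could be implemented: a Monte Carlo null distribution for the Wilcoxon signed-rank statistic obtained by randomly flipping the signs of paired differences, followed by a Benjamini-Hochberg adjustment across the exposure-duration comparisons. Apart from the 5000 resamples, the implementation details and the example data are our assumptions, not the authors' code.

```python
# Minimal sketch (our assumptions, not the authors' code): a Monte Carlo null
# for the Wilcoxon signed-rank statistic via sign flipping, plus a
# Benjamini-Hochberg FDR adjustment across exposure-duration comparisons.
import numpy as np

def wilcoxon_stat(diffs):
    # Signed-rank statistic: sum of the ranks of absolute differences that
    # carry a positive sign (zeros dropped; ties broken arbitrarily here,
    # which is adequate for a sketch).
    diffs = diffs[diffs != 0]
    ranks = np.argsort(np.argsort(np.abs(diffs))) + 1
    return ranks[diffs > 0].sum()

def monte_carlo_wilcoxon(diffs, n_resamples=5000, seed=None):
    diffs = np.asarray(diffs, float)
    rng = np.random.default_rng(seed)
    observed = wilcoxon_stat(diffs)
    null = np.empty(n_resamples)
    for i in range(n_resamples):
        flips = rng.choice([-1.0, 1.0], size=diffs.size)
        null[i] = wilcoxon_stat(diffs * flips)
    # Two-sided p value: how extreme is the observed statistic relative to
    # the center of its Monte Carlo null distribution?
    return np.mean(np.abs(null - null.mean()) >= np.abs(observed - null.mean()))

def benjamini_hochberg(p_values):
    # Step-up FDR adjustment returning adjusted p values.
    p = np.asarray(p_values, float)
    order = np.argsort(p)
    m = len(p)
    adjusted = np.empty(m)
    running_min = 1.0
    for rank_from_last, idx in enumerate(order[::-1]):
        rank = m - rank_from_last
        running_min = min(running_min, p[idx] * m / rank)
        adjusted[idx] = running_min
    return adjusted

# Hypothetical per-participant d' differences (superordinate minus basic)
# at each of six exposure durations; one raw p per duration, then FDR.
rng = np.random.default_rng(0)
diffs_by_duration = [rng.normal(loc, 0.5, size=24)
                     for loc in (0.5, 0.4, 0.1, -0.1, -0.3, -0.2)]
raw_p = [monte_carlo_wilcoxon(d, seed=1) for d in diffs_by_duration]
print(benjamini_hochberg(raw_p))
```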

Figure 4. Sensitivity (d’) results for Experiment 2. Performance for superordinate and basic-level categorization is plotted as a function of stimulus exposure duration. Error bars represent 95% confidence intervals based on the interaction of category level (superordinate vs. basic) and exposure duration.

The analysis of variance showed that sensitivity increased with longer exposure durations, as revealed by a significant main effect of Exposure Duration (F5,115 = 48.37, MSE = 0.256, p < 0.001, ηp2 = 0.678). There was no significant main effect of Category Level (F1,23 = 0.671, MSE = 0.349, p = 0.421, ηp2 = 0.028). However, there was a significant interaction of Category Level and Exposure Duration (F5,115 = 6.575, MSE = 0.186, p < 0.001, ηp2 = 0.222) such that with short exposure durations sensitivity was higher for superordinate than basic-level categorization, but with longer exposure durations sensitivity was higher for basic-level than superordinate categorization. Planned comparisons of Category Level at each exposure duration revealed converging evidence of this crossover interaction, with significant differences at 33ms (p = 0.0028, difference CI = [0.325, 0.814]) and 125ms (p = 0.049, difference CI = [−0.603, −0.036]) and a marginally significant difference at 250ms (p = 0.09, difference CI = [−0.482, 0.005]).

Analysis of correct “yes” response times (Table 1) showed that responses were generally somewhat faster with longer exposure duration (F5,115 = 13.07, MSE = 772.9, p < 0.001, ηp2 = 0.362), but were equivalent across category level (F1,23 = 1.911, MSE = 3623.1, p = 0.181, ηp2 = 0.077). The interaction of Category Level and Exposure Duration was not significant (F5,115 = 1.097, MSE = 517.9, p = 0.366, ηp2 = 0.046). Planned comparisons revealed no significant differences between superordinate and basic-level response times (ps > 0.1).

Table 1

Experiment 2 median “yes” response times (standard error of the mean in parentheses)

Exposure duration (ms)
Level      25        33        50        75        125       250
basic      488 (9)   471 (12)  461 (12)  455 (9)   460 (11)  479 (11)
super.     484 (14)  456 (13)  457 (13)  434 (12)  454 (12)  470 (12)

Discussion

We systematically varied image exposure duration, followed immediately by a mask, to open a window on the evolving temporal dynamics of perceptual encoding during perceptual categorization at superordinate and basic levels of abstraction. Consistent with typical ultra-rapid categorization findings (Fabre-Thorpe, 2011; Mack & Palmeri, 2011), replicated in our Experiment 1, a superordinate-level advantage was observed at short exposure durations. Sensitivity (d’) to categorize an object as an animal was higher than sensitivity to categorize it as a dog. This suggests that the information available early in the time course of perceptual encoding, information able to survive the immediate onset of a backward mask, supports superordinate categorization, at least during blocked target category presentations. With longer exposures, the advantage for superordinate over basic-level categorization was eliminated and a basic-level advantage emerged. This crossover interaction in sensitivity with exposure duration is consistent with the time course of perceptual processing hypothesized by Rogers and Patterson (2007).

Experiment 3: Perceptual Categorization at a Glance

Ultra-rapid categorization is often described as a window on the early stages of visual processing (Bacon-Macé et al., 2007; Fabre-Thorpe et al., 2001; Macé et al., 2009; Thorpe et al., 1996). Experiment 3 used ultra-rapid exposure durations of 25ms for every image, varying the onset time of the backward mask to systematically map out the time available to perceptually process that brief image. Whereas Experiment 2 systematically manipulated exposure duration followed immediately by a backward mask, which varies the amount of time available to extract visual information from an image, Experiment 3 held exposure duration constant and systematically manipulated the onset time of the backward mask, which varies the amount of time available to interpret and use that visual information to make a perceptual decision.

Methods

Participants

Fourteen Vanderbilt University undergraduate students (8 female, age range 18-24, average age 19.5 years) with normal or corrected-to-normal vision participated in this experiment. Participants received course credit for their participation. Informed consent was obtained prior to participation in accordance with Vanderbilt University's Institutional Review Board.

Stimuli

The dog, animal, and means-of-transportation stimuli and the mask stimuli from Experiment 2 were used in Experiment 3. No stimulus image was repeated in an experimental session.

Procedure

Experiment 3 followed the same procedures as Experiment 2 with the following exceptions: Stimulus images were always presented for 25ms followed by a dynamic mask at an SOA of 25, 33, 50, 75, or 125ms (measured from the onset of the stimulus image). Participants completed six blocks: three consecutive blocks of 100 trials with the superordinate category animal as a target and three consecutive blocks of 100 trials with the basic-level category dog as a target. This resulted in 10 experimental conditions consisting of the five mask SOAs crossed with two categorization types. Half of the participants performed the basic-level category target blocks first and the other half performed the superordinate target blocks first. Trials were evenly split between “yes” and “no” trials. The order of the mask SOA was randomized throughout a block. The entire experiment consisted of 600 trials (60 trials in each of the 10 experimental conditions) and lasted approximately 40 minutes.

Results

As in the previous experiment, results were comparable for both orderings of target category context, so data were collapsed across order. Figure 5 displays the average sensitivity (d’) as a function of mask SOA for superordinate and basic categorization. Also as in the previous experiment, superordinate performance was based on the “yes” superordinate trials that included a dog stimulus image. A 2 × 5 analysis of variance was conducted with Category Level (superordinate versus basic-level categorization) and Mask SOA (25, 33, 50, 75, 125ms) as within-subjects factors for both sensitivity (d’) and response time for correct “yes” responses. Planned comparisons between superordinate and basic-level categorization were conducted with Wilcoxon signed-rank tests as described in Experiment 2.

Figure 5. Sensitivity (d’) results for Experiment 3. Performance for superordinate and basic-level categorization is plotted as a function of Mask SOA. Error bars represent 95% confidence intervals based on the interaction of category level (superordinate vs. basic) and Mask SOA. Dotted curves show mask SOA-accuracy tradeoff functions fit to average superordinate and basic sensitivities (as described in the main text).

Sensitivity increased with longer mask SOA (F4,52 = 23.8, MSE = 0.315, p < 0.001, ηp2 = 0.647) and was higher for superordinate than basic-level categorization (F1,13 = 32.2, MSE = .448, p < 0.001, ηp2 = 0.712). Critically, there was a significant interaction (F4,52 = 2.78, MSE = 0.161, p = 0.036, ηp2 = 0.176) such that the difference in sensitivity between superordinate and basic at shorter mask SOAs was larger than at longer mask SOAs. Planned comparisons comparing sensitivity for superordinate and basic-level categorization at each mask SOA showed a significant difference at all SOAs (33ms: p = 0.005, CI difference = [.669, 1.468]; 50ms: p = 0.032, CI difference = [.081, 1.214]; 75ms: p = 0.007, CI difference = [.299, .961]; 125ms: p = 0.005, CI difference = [.154, .812]) except the 25ms SOA, which had a marginally significant difference (p = 0.069, CI difference = [−.017, .939]).

An ANOVA for median correct “yes” response times (Table 2) showed a main effect of Mask SOA (F4,52 = 6.96, MSE = 808.1, p < 0.001, ηp2 = 0.349) such that correct response times were shorter with longer SOAs. Neither the main effect of Category Level (F1,13 = 2.45, MSE = 6282.6, p = 0.142, ηp2 = 0.159) nor the interaction (F4,52 = 0.384, MSE = 500.7, p = 0.819, ηp2 = 0.029) were significant. Similarly, planned comparisons revealed no significant differences between superordinate and basic conditions at the different mask SOAs (ps > 0.15).

Table 2

Experiment 3 correct “yes” response times (standard error of the mean in parentheses)

Mask SOA (ms)
Level      25        33        50        75        125
basic      474 (21)  434 (17)  446 (17)  436 (19)  441 (12)
super.     446 (26)  414 (21)  422 (20)  418 (22)  426 (20)

To quantitatively characterize how performance changed as a function of Mask SOA, the sensitivity values from individual participants were fitted with an exponential function (Wickelgren & Corbett, 1977),

d′ = λ(1 − e^(−β(t − δ))),

where t is the Mask SOA, λ is the asymptote, β is the growth, and δ is the onset. The asymptote represents an expected maximum sensitivity for a task given an unmasked presentation; the growth rate represents the rate at which relevant information is extracted; and the onset represents when performance begins to grow above chance during the time course of processing. By fitting this function to each participant's data, we can statistically compare the resulting parameter values (λ, β, and δ) for superordinate and basic-level categorization.
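To make the fitting procedure concrete, here is a minimal sketch of how the shifted exponential d′ = λ(1 − e^(−β(t − δ))) could be fit to one participant's sensitivities by nonlinear least squares. The data, starting values, and bounds are hypothetical assumptions for illustration, not the authors' fitting routine.

```python
# Minimal sketch (assumed details, hypothetical data): fitting the shifted
# exponential d'(t) = lambda * (1 - exp(-beta * (t - delta))) to one
# participant's sensitivity as a function of mask SOA.
import numpy as np
from scipy.optimize import curve_fit

def shifted_exponential(t, lam, beta, delta):
    # Sensitivity is zero before the onset delta, then grows toward the
    # asymptote lam at rate beta.
    return lam * (1.0 - np.exp(-beta * np.clip(t - delta, 0.0, None)))

soa_ms = np.array([25.0, 33.0, 50.0, 75.0, 125.0])
d_prime = np.array([1.1, 1.9, 2.3, 2.5, 2.6])          # hypothetical values

params, _ = curve_fit(
    shifted_exponential, soa_ms, d_prime,
    p0=[2.5, 0.05, 20.0],                               # assumed starting values
    bounds=([0.0, 0.0, 0.0], [6.0, 1.0, 125.0]),        # assumed bounds
)
lam, beta, delta = params
print(f"asymptote={lam:.2f}, growth={beta:.3f}, onset={delta:.1f} ms")
```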

After fitting the exponential function to each individual participant's sensitivity data, we conducted planned comparisons of parameter differences between superordinate and basic-level categorization performance. Fits of the exponential function to average sensitivity are shown in Figure 5 and average parameter values are shown in Table 3. Confirming the ANOVA results above, superordinate sensitivity had a higher asymptote (W = 94, p = 0.01, difference CI = [.228, .789]) and a higher growth rate (W = 96, p = 0.007, difference CI = [.145, 3.450]) than basic sensitivity, but onsets did not differ (W = 54, p = 0.949, difference CI = [−12.484, 12.234]).

Table 3

Mean exponential function parameters (standard error of the mean in parentheses) and mean summed square error of the SAT fits

Level      λ            β            δ             SSE
basic      3.07 (0.26)  2.59 (0.56)  21.43 (2.52)  0.372
super.     2.58 (0.21)  0.60 (0.37)  21.46 (2.49)  0.585

Discussion

In Experiment 3, we used a single brief exposure duration with a dynamic mask presented at systematically varying time points after exposure in order to map out the time course of perceptual processing that can support categorization at superordinate and basic levels. The extent to which categorization is resilient to the onset of the mask reveals, in some sense, how much category-relevant information is available at that time point.

The longest mask SOA represents the condition most similar to a typical ultra-rapid categorization experiment, since most ultra-rapid categorization experiments use no mask at all; there, a superordinate-level advantage was observed. At the shortest SOA, with 25ms exposure to the stimulus image followed by an immediate mask, a marginally significant superordinate-level advantage was observed. While the immediate mask affected both superordinate and basic-level categorization, marginally more information diagnostic for the superordinate category survived the onset of the mask. This difference reached significance throughout the remaining time window of mask SOAs. Interestingly, performance for superordinate categorization reached its asymptote by the 33ms SOA, whereas performance for basic categorization increased at a slower rate, as revealed by the significant interaction in sensitivity and by the significantly lower growth rate in the fitted exponential functions.

These results suggest that with brief exposures, the information relevant for categorization decisions favors an object's superordinate over basic category. Mapping out the time course of perceptual processing reveals that this information is available quickly. With only a glance at an object, the encoded perceptual representation quickly supports the object's superordinate category (Bacon-Macé et al., 2005). The growth in performance for basic-level categorization was slower relative to superordinate categorization as would be expected from a longer encoding process for basic-level category information, as hypothesized, for example, by Rogers and Patterson (2007). Also, the overall better performance for superordinate than basic categorization suggests a difference in the quality of the perceptual evidence that can be extracted with brief image exposures.

Experiment 4: Randomized Target Category Context During Ultra-Rapid Categorization

In the earliest ultra-rapid categorization experiments, the only target category tested was animal (e.g., Thorpe et al., 1996). In later experiments, when more than one target category was tested, the target category context was always blocked (e.g., Macé et al., 2009; Rousselet et al., 2003; VanRullen & Thorpe, 2001a). The stated intent of these designs was to create what has been characterized as an “open loop”: Participants are perfectly aware of the target category, the appropriate attentional weights are established to extract the most diagnostic perceptual evidence, and decision criteria are optimized for the fastest responses. To further optimize performance, ultra-rapid categorization experiments often include a relatively large number of practice trials that are not included in the data analysis. In this way, the ultra-rapid categorization task has been designed to capture the fastest categorization decisions possible (Thorpe et al., 1996; VanRullen & Koch, 2003; VanRullen & Thorpe, 2002).

By contrast, in a classic speeded category verification experiment, stimulus exposure is unlimited, target categories at different levels of abstraction are randomized throughout, and there are few, if any, practice trials (e.g., Jolicoeur et al., 1984; Murphy & Brownell, 1985; Rosch et al., 1976; Tanaka & Taylor, 1991).

Aside from the obvious differences in exposure duration, do these other procedural differences matter? The interim answer seems to be yes. In Experiment 1, the basic-level advantage was eliminated only when exposures were brief and target categories were blocked. The next two experiments further explore the effects of randomizing or blocking target category context during ultra-rapid categorization and attempt to understand why and when it matters. Experiment 4 replicates and extends the effect of target context observed in Experiment 1 using a within-subject manipulation of target context. Experiment 5 examines the effect of very localized target category contexts within an otherwise randomized context.

Methods

Sixteen Vanderbilt University undergraduates (10 female, age range 18-23, average age 19.8 years) participated in the experiment in exchange for course credit. Experiment 4 followed the same procedures as Experiment 1 with the following exceptions: First, all stimuli were presented for brief exposure durations (25ms). Second, half of the experiment used blocked target category context and the other half used randomized target category context. As in Experiment 1, participants were tested on superordinate (animal vs. plant), basic (dog vs. bird), and subordinate (the eight dog breeds and bird species listed in Experiment 1) categorization. Also as in Experiment 1, the target category label was presented at the start of every trial, regardless of whether target context was blocked or randomized. The order of the target category context conditions was counterbalanced across participants. Stimuli were randomly assigned to the blocked and randomized target category context conditions across participants, and no stimulus was repeated during the experiment. Between the two halves of the experiment, participants completed an unrelated filler task (a different experiment for a different project) that lasted about 30 minutes.

Results

Figure 6 presents the median correct “yes” response times for the blocked (left) and randomized (right) target contexts; see Table 4 for average sensitivity (d’) measures. Results were equivalent for both orderings of target context, so data were collapsed across order. A 2 × 3 analysis of variance was conducted with Target Context (blocked vs. randomized) and Category Level (superordinate vs. basic vs. subordinate) as within-subject factors for both response time and sensitivity. A basic-level advantage in response time was observed in the randomized target context, but no differences across category level were observed in the blocked target context. Overall, both target context (F1,15 = 8.173, MSE = 4528.8, p = 0.012, ηp2 = 0.230) and category level (F1,15 = 7.747, MSE = 1575.2, p = 0.0019, ηp2 = 0.192) had a significant effect on response times. Critically, the interaction between target context and category level was significant (F1,15 = 6.161, MSE = 1501.2, p = 0.0057, ηp2 = 0.223). Planned comparisons across target context showed that response times were faster with a blocked target context for both superordinate (t15 = 2.62, p = 0.01, d = 0.620) and subordinate (t15 = 2.602, p = 0.021, d = 0.830) categorization. Response times for basic categorization were equivalent across target context (t15 = 0.076, p = 0.941).
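For readers who want to reproduce this kind of analysis, the sketch below shows a 2 × 3 repeated-measures ANOVA with Target Context and Category Level as within-subject factors, using statsmodels' AnovaRM on long-format data; the data and variable names are assumptions for illustration, not the authors' code.

```python
# Minimal sketch (hypothetical data, not the authors' code): a 2 x 3
# repeated-measures ANOVA with Target Context and Category Level as
# within-subject factors, as implemented in statsmodels.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)
rows = []
for subject in range(16):
    for context in ("blocked", "randomized"):
        for level in ("superordinate", "basic", "subordinate"):
            # Hypothetical median correct-"yes" RT for this cell (ms).
            rows.append({"subject": subject, "context": context,
                         "level": level, "rt": 450 + rng.normal(0, 30)})
data = pd.DataFrame(rows)

anova = AnovaRM(data, depvar="rt", subject="subject",
                within=["context", "level"]).fit()
print(anova)
```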

Figure 6

Average correct “yes” response times for superordinate, basic, and subordinate categorization in the blocked target context (left plot) and randomized target context (right plot) conditions. Error bars represent 95% confidence intervals based on the interaction of categorization level and target context.

Table 4

Mean sensitivity (d’) for Experiment 4 (standard error of the mean in parentheses)

Level | Target Context: blocked | Target Context: randomized
superordinate | 2.74 (0.17) | 2.63 (0.17)
basic | 2.89 (0.18) | 2.68 (0.16)
subordinate | 1.93 (0.19) | 1.74 (0.12)

Sensitivity (see Table 4) was equivalent across target context (F(1, 15) = 2.749, MSE = 0.246, p = 0.118, ηp² = 0.155), but did differ depending on category level (F(1, 15) = 35.59, MSE = 0.244, p < 0.0001, ηp² = 0.704), such that sensitivity was lower for subordinate categorization than for superordinate and basic-level categorization. Category level and target context did not interact (F(1, 15) = 0.115, MSE = 0.198, p = 0.892, ηp² = 0.001). Planned comparisons across target context showed no differences (ts(15) < 1.2, ps > 0.25).
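The d’ values reported here presumably come from the standard yes/no signal-detection computation, d’ = z(hit rate) − z(false-alarm rate); the specific correction used for extreme hit or false-alarm rates is not stated, so the log-linear correction in the sketch below is an assumption.

```python
# Hedged sketch of a standard d' computation with a log-linear correction
# (an assumed convention) so that rates of 0 or 1 do not produce infinite z-scores.
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Example: 45 of 50 match trials answered "yes", 5 of 50 non-match trials answered "yes".
print(round(d_prime(45, 5, 5, 45), 1))  # 2.5
```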

Table 5

Mean priming effect in sensitivity (standard error of the mean in parentheses) for Experiment 5

probe trial categorization | baseline d’ | match superordinate | non-match superordinate | match basic | non-match basic
superordinate | 2.921 (0.45) | 0.361 (0.09)* | −0.095 (0.15) | 0.354 (0.08)* | 0.088 (0.13)
basic | 2.848 (0.58) | 0.032 (0.16) | 0.058 (0.15) | −0.033 (0.17) | 0.101 (0.15)

Note: Asterisks mark significant priming effects (Δd’ > 0, p < 0.05). The baseline column reports raw d’ rather than a priming effect.

Discussion

In a within-subject design, this experiment demonstrated the important role of target category context in the relative speed of categorization decisions at different levels of abstraction. With a randomized target category context, typical of a standard category verification paradigm, a classic basic-level advantage was observed. This advantage was eliminated when target category context was blocked, replicating and extending our results from Experiment 1.

We manipulated target context as a within-subject factor, allowing us to directly compare the effect of target context on the speed of categorization at different levels of abstraction without the uninteresting potential confound of between-group differences in categorization speed. The critical question was whether a blocked target context causes faster superordinate categorization, slower basic categorization, or a combination of both. Our results suggest that a blocked target context leads to faster, more efficient categorization at the superordinate level but little or no change at the basic level. One possibility is that the increased efficiency of processing at the superordinate level is due to a shift in how relevant information is retrieved. In the blocked target context, recently viewed instances of the relevant superordinate category may be directly retrieved (Logan, 1988; Palmeri, 1997). This direct retrieval is likely faster than the default process potentially at play in the randomized target context, whereby superordinate category representations are activated indirectly through the retrieval of basic-level category representations that, in turn, activate semantically related superordinate category representations (Jolicoeur et al., 1984). In contrast, perhaps basic-level categorization is relatively immune to contextual manipulations because of its automatic nature (Richler et al., 2011).

Somewhat unexpectedly, focusing exclusively on categorizing at one level of abstraction also improved processing of subordinate categories: subordinate categorizations were significantly faster with a blocked target category context as well. Typically, subordinate categorization is a relatively slow, more effortful process that requires extracting more detailed perceptual features (Jolicoeur et al., 1984; Mack et al., 2009; Murphy & Smith, 1982; Tanaka & Taylor, 1991). In a previous study employing a signal-to-respond category verification paradigm (Mack et al., 2009), a basic-level advantage relative to subordinate categorization was consistently observed across a range of processing times (~400-1000ms). To our knowledge, subordinate categorization has not been examined previously using an ultra-rapid categorization paradigm. In a fashion analogous to the increased efficiency seen in superordinate categorization, a blocked target context may have given participants an opportunity to discover and focus on the diagnostic features required for subordinate categorization.

We note that the order of the target context conditions – whether participants performed the blocked or randomized target context first – had no effect on categorization performance. This suggests that the effects of target context emerge and dissipate within a fairly local window of trials: a blocked target context may lead to faster superordinate categorization, but this speed advantage is eliminated once the target context is randomized. Experiment 5 systematically investigated the consequences of local blocks of target category context on superordinate and basic-level categorization within an otherwise randomized context.

Experiment 5: Effects of Local Target Category Context

Repeating the same superordinate categorization over many trials, as is typical of ultra-rapid categorization tasks, eliminates the classic basic-level advantage. Limiting exposure duration and focusing on a single level of abstraction over a long block of trials together produce significantly faster superordinate categorization. How does the relative speed of categorization change according to the local target category context?

Experiment 5 investigated the effects of local differences in category context on superordinate and basic-level categorization. To the participant, the experiment appeared just like a categorization task with ultra-rapid exposure and a randomized target category context, mirroring the conditions used in other experiments in this article. However, the seemingly random sequence of trials actually contained pre-specified pairs of trials in which participants categorized objects consecutively at either the same level or different levels of abstraction. We call the first the “prime trial” and the second the “probe trial”. Would categorization on the probe trial be facilitated or inhibited by the very local category context introduced by the preceding prime trial? For example, is categorizing an object as a dog faster or slower after just categorizing another object as either a dog or as an animal? Is categorizing an object as an animal faster or slower depending on whether the previous image was categorized as an animal as well? Furthermore, is categorizing an object as a dog (or animal) faster or slower after categorizing a short sequence of more than one object as a dog (or animal)?

Methods

Participants

Twenty Vanderbilt University undergraduate students (13 female, age range 18-22, average age 19.5 years) with normal or corrected-to-normal vision participated in this experiment. Participants received course credit for their participation. Informed consent was obtained prior to participation in accordance with Vanderbilt University's Institutional Review Board.

Stimuli

The same dog, animal, and means-of-transportation stimuli from Experiment 2 were used in Experiment 5, along with the bird stimuli from Experiment 1. No stimulus image was repeated in an experimental session.

Procedure

Experiment 5 consisted of a structured sequence of trials of superordinate and basic-level category verification. Each individual trial began with a fixation cross, followed by a superordinate or basic category label for 1000ms, then an unmasked stimulus image for 25ms. As in other experiments, responses as to whether the pictured object matched the category label could be made up to 1000ms after the stimulus onset.

What was novel about Experiment 5 was that trials were paired to create critical pairs, baseline pairs, and filler pairs. Critical pairs were one of four types defined by the target category of the prime and the target category of the probe trial: There could be pairs where target categories were at the same level of abstraction (superordinate-superordinate or basic-basic) and pairs where target categories were at different levels of abstraction (superordinate-basic or basic-superordinate).

Baseline pairs consisted of a superordinate or basic-level categorization trial preceded by a trial requiring a completely unrelated parity judgment: a random number between 1 and 8 was presented, and participants simply indicated whether that number was odd or even.

Filler pairs consisted of a superordinate categorization trial with a target category of means of transportation preceded by either the unrelated number parity task, a superordinate categorization of an animal or means of transportation, or a basic-level categorization of a bird or dog.

The experiment included an equal number of trials for each target category (animal, dog, bird, and means of transportation); half of the trials were match trials (where the correct answer was yes) and half were non-match trials (where the correct answer was no). This trial structure fully crossed two factors: the categorization level of the prime trial (Prime Level – superordinate or basic) and the correct response to the prime trial (Prime Match – “yes”/match or “no”/non-match). Thus, the experiment had four priming types: superordinate match, superordinate non-match, basic match, and basic non-match. Trial pairs representing these four priming types, along with baseline trial pairs and filler trial pairs, were randomly ordered. Although trials were structured into pairs, from the participants’ perspective they experienced a completely random sequence of categorization trials at the superordinate or basic level (along with trials requiring parity judgments). The entire experiment consisted of 12 practice trials and 460 experimental trials (40 trial pairs in the four priming types, 20 trial pairs in each of the superordinate and basic-level baseline conditions, and 40 trial pairs in the filler condition) and lasted approximately 40 minutes.
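As an illustration of how such a pair structure could be generated, the sketch below (an assumed structure, not the authors' code) crosses probe level, prime level, and prime match to form the critical pairs, adds baseline pairs headed by a parity trial, and shuffles pairs as units so the flattened trial list appears random. Filler pairs are omitted and the cell counts are placeholders.

```python
# Hypothetical sketch of the prime-probe pair structure.
import itertools
import random

def build_pair_sequence(n_per_cell=10, n_baseline=20, seed=0):
    rng = random.Random(seed)
    pairs = []
    # Critical pairs: probe level x prime level x prime match (the correct
    # response on the prime trial), i.e., the four priming types at each probe level.
    for probe_level, prime_level, prime_match in itertools.product(
            ['superordinate', 'basic'], ['superordinate', 'basic'],
            ['match', 'non-match']):
        pairs += [(('prime', prime_level, prime_match), ('probe', probe_level))
                  for _ in range(n_per_cell)]
    # Baseline pairs: an unrelated parity judgment followed by a categorization.
    for probe_level in ['superordinate', 'basic']:
        pairs += [(('parity',), ('probe', probe_level))
                  for _ in range(n_baseline)]
    rng.shuffle(pairs)   # shuffle pairs as units; trials within a pair stay adjacent
    return [trial for pair in pairs for trial in pair]
```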

Results

Averaged across all trials irrespective of local context, Experiment 5 revealed a classic basic-level advantage: correct “yes” response times were significantly faster (p < 0.0001) for basic-level categorization (M = 495ms) than for superordinate categorization (M = 548ms). Performance in the baseline conditions (i.e., categorization trials that followed a number parity trial) showed a similar basic-level advantage (superordinate: 562ms, basic: 494ms, p < 0.0001). For sensitivity (d’), superordinate and basic-level performance was equivalently high overall (superordinate: 3.415, basic: 3.298, p = 0.08), and the same was true in the baseline condition (superordinate: 2.921, basic: 2.848, p = 0.518). These values reflect median RTs and average d’ across all trials. What is important for this experiment is breaking down the results according to trial pairs.

The critical comparisons involved the priming types and the baseline condition. We calculated a “priming effect” for categorization relative to baseline trials. The priming effect in response time (ΔRT) was calculated by subtracting the median correct “yes” response time in each of the four priming conditions from the median correct “yes” response time for the corresponding baseline condition. The priming effect in sensitivity (Δd’) was calculated by subtracting the baseline sensitivity from the priming-type sensitivity. With these particular subtractions, a positive value of ΔRT or Δd’ indicates facilitation due to the prime and a negative value indicates inhibition due to the prime. Since stimulus presentation was not masked, we expected that facilitation or inhibition due to the prime would be best observed in response times. The priming effects on superordinate and basic-level categorization probe trials were calculated for the four priming types: match superordinate, non-match superordinate, match basic, and non-match basic.
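Stated as equations, with the tilde denoting a median over correct “yes” trials, the two measures are simple differences against baseline, signed so that positive values indicate facilitation and negative values indicate inhibition:

\[
\Delta RT = \widetilde{RT}_{\mathrm{baseline}} - \widetilde{RT}_{\mathrm{priming\ type}}, \qquad
\Delta d' = d'_{\mathrm{priming\ type}} - d'_{\mathrm{baseline}}.
\]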

The priming effects on superordinate and basic-level categorization probe trial response times were evaluated with a 2 × 2 × 2 analysis of variance with Categorization Level (superordinate vs. basic), Prime Level (superordinate vs. basic), and Prime Match (match vs. non-match) as within-subject factors. This omnibus test showed a significant interaction of Categorization Level and Prime Level (F(1, 19) = 9.98, MSE = 9718.8, p = 0.0052, ηp² = 0.344) and a marginally significant interaction of Categorization Level and Prime Match (F(1, 19) = 4.01, MSE = 13231.4, p = 0.059, ηp² = 0.174). To aid in interpreting the differences among the priming conditions for superordinate and basic-level categorization, we performed 2 × 2 analyses of variance with Prime Level (superordinate vs. basic) and Prime Match (match vs. non-match) as within-subject factors for superordinate and basic-level categorization separately.

Priming effects in response times are shown in Figure 7. For superordinate probe trials, there was a significant main effect of Prime Level (F(1, 19) = 17.42, MSE = 1010.8, p = 0.0005, ηp² = 0.478), such that a superordinate prime led to faster probe categorization at the superordinate level than a basic-level prime. The main effect of Prime Match was also significant (F(1, 19) = 19.52, MSE = 4161.2, p = 0.0003, ηp² = 0.507), such that a match prime led to faster probe categorization at the superordinate level than a non-match prime. No interaction of Prime Level and Prime Match was observed (F(1, 19) = 1.01, MSE = 1808.1, p = 0.329, ηp² = 0.050). One-sample t-tests were conducted to assess whether each prime condition showed a significant priming effect (|ΔRT| > 0). Both the match superordinate (t(19) = 3.84, p = 0.0011, d = 0.882) and match basic-level prime (t(19) = 2.97, p = 0.0078, d = 0.682) led to a significant positive priming effect, a non-match superordinate prime had no significant effect (t(19) = 0.24, p = 0.813, d = 0.055), and a non-match basic-level prime led to a significant negative priming effect (t(19) = 3.07, p = 0.006, d = 0.711).

Figure 7

Prime-probe results from Experiment 5. The legend for the prime condition (i.e., the type of categorization in the preceding trial) is illustrated in the left panel (“yes” superordinate categorization; “no” superordinate categorization; “yes” basic categorization; “no” basic categorization). The right panel plots the priming effect (correct “yes” RT in each priming condition subtracted from the correct “yes” baseline RT) for the four priming conditions on probe trials of superordinate (left bars) and basic (right bars) categorization. Error bars represent 95% confidence intervals of the interaction error term from the two-way ANOVAs of Prime Level (superordinate vs. basic) and Prime Match (match vs. non-match).

For basic-level probe trials, there was a significant main effect of Prime Match (F(1, 19) = 4.76, MSE = 3140.1, p = 0.042, ηp² = 0.201), such that match primes led to faster probe categorization at the basic level than non-match primes; a prime trial requiring a “yes” response somewhat facilitated a subsequent probe trial requiring a “yes” response regardless of the category level of the probe. But neither the main effect of Prime Level (F(1, 19) = 0.035, MSE = 1303.0, p = 0.855, ηp² = 0.002) nor the interaction (F(1, 19) = 0.889, MSE = 3052.4, p = 0.358, ηp² = 0.045) was significant. Planned comparisons revealed no significant priming effects (ts(19) < 1.5, ps > 0.15).

ANOVAs were also conducted on the priming effects in terms of sensitivity (d’) (see Table 5). For superordinate probe trials, there was a significant main effect of Prime Match (F(1, 19) = 8.22, MSE = 0.317, p = 0.01, ηp² = 0.302), such that match primes led to higher sensitivity on probe trials than non-match primes. Neither the main effect of Prime Level (F(1, 19) = 0.035, MSE = 1303.0, p = 0.855, ηp² = 0.062) nor the interaction (F(1, 19) = 0.889, MSE = 3052.4, p = 0.358, ηp² = 0.078) reached significance. Both the match superordinate (t(19) = 4.03, p = 0.0007, d = 0.924) and match basic prime (t(19) = 4.76, p = 0.0001, d = 1.09) led to a significant positive priming effect, but neither non-match prime led to a significant priming effect (ts(19) < 0.75, ps > 0.48). For basic probe trials, there were no significant effects of priming on sensitivity (Fs(1, 19) < 0.277, ps > 0.6). Similarly, planned comparisons revealed no significant priming effects (ts(19) < 1.5, ps > 0.15).

An additional post-hoc analysis was conducted to investigate the role of repeated categorization at the same level of abstraction beyond the immediately preceding trial. Median correct response times for superordinate and basic-level categorization were analyzed as a function of the number of immediately preceding trials at the same level of abstraction, a post-hoc factor we refer to as “run length” (e.g., superordinate categorization RT for a run length of 2 is the average RT across all correct superordinate categorization trials that were preceded by one superordinate categorization trial). This analysis disregarded the designation of trial pairs and instead relied on searching each participant's pseudo-randomized trial sequence for runs of superordinate and basic-level categorizations of different lengths. There were sufficient data (at least 5 data points on average per participant) to examine run lengths of 1, 2, 3, and 4. Figure 8 plots superordinate and basic categorization RTs as a function of run length.
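The run-length computation amounts to scanning each participant's trial sequence and counting, for every trial, how many immediately preceding trials required categorization at the same level. A hedged pandas sketch (assumed column names, not the authors' code) is shown below.

```python
# Sketch: assign each trial its run length, then summarize correct RTs by
# level and run length for each participant.
import pandas as pd

def run_length_rts(trials: pd.DataFrame, max_run=4) -> pd.DataFrame:
    """trials: one row per trial in presentation order within each subject,
    with columns 'subject', 'level' ('superordinate'/'basic'),
    'correct' (bool), and 'rt_ms'."""
    out = trials.copy()
    # A new run starts whenever the level differs from the previous trial's level.
    new_run = out['level'] != out.groupby('subject')['level'].shift()
    run_id = new_run.groupby(out['subject']).cumsum()
    # Run length = position within the run (1 = no same-level predecessor,
    # 2 = one same-level predecessor, and so on).
    out['run_length'] = out.groupby([out['subject'], run_id]).cumcount() + 1
    kept = out[out['correct'] & (out['run_length'] <= max_run)]
    return (kept.groupby(['subject', 'level', 'run_length'], as_index=False)['rt_ms']
                .median())
```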

Figure 8

Median correct “yes” response times for superordinate and basic categorization as a function of the number of repeated trials at the same level of abstraction (e.g., the superordinate value plotted at a run length of 4 is the average correct RT for trials that were preceded by three superordinate categorization trials). Error bars represent 95% confidence intervals from the interaction of category level (superordinate vs. basic) and run length.

A 2 × 4 analysis of variance with Category Level (superordinate vs. basic) and Run Length (1, 2, 3, and 4) as within-subject factors was conducted on correct RTs. As shown in Figure 8, basic-level categorization was faster than superordinate categorization (main effect of Category Level: F(1, 19) = 21.12, MSE = 2358.3, p = 0.0002, ηp² = 0.526), and there was no main effect of Run Length (F(3, 57) = 0.633, MSE = 1632.4, p = 0.597, ηp² = 0.032). But these factors interacted significantly (F(3, 57) = 4.28, MSE = 1914.2, p = 0.0086, ηp² = 0.184), such that superordinate categorizations were faster with longer run lengths while basic-level categorizations did not change significantly (although they trended towards slower responses at longer run lengths). Wilcoxon signed-rank tests corrected for false discovery rate provided converging evidence for the interaction, with faster basic than superordinate categorization at shorter run lengths (1: W = 0, p < 0.0001, difference CI = [40.3, 74.9]; 2: W = 4, p = 0.0004, difference CI = [32.1, 72.9]), a marginal difference at run length 3 (W = 49, p = 0.05, difference CI = [3.15, 62.1]), but no difference across category level at run length 4 (W = 96, p = 0.751, difference CI = [−49.5, 42.8]).
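These follow-up comparisons could be run on the per-subject medians from the sketch above: a paired Wilcoxon signed-rank test of superordinate versus basic RT at each run length, with Benjamini-Hochberg false-discovery-rate correction across the four tests. The layout below is an assumption for illustration, not the authors' code.

```python
# Sketch: FDR-corrected Wilcoxon signed-rank tests at each run length.
import pandas as pd
from scipy.stats import wilcoxon
from statsmodels.stats.multitest import multipletests

def compare_levels_by_run_length(cell_medians: pd.DataFrame) -> pd.DataFrame:
    """cell_medians: one row per subject x level x run_length with an 'rt_ms' column."""
    wide = (cell_medians
            .pivot_table(index=['subject', 'run_length'],
                         columns='level', values='rt_ms')
            .reset_index())
    run_lengths = sorted(wide['run_length'].unique())
    stats, pvals = [], []
    for rl in run_lengths:
        sub = wide[wide['run_length'] == rl]
        w, p = wilcoxon(sub['superordinate'], sub['basic'])  # paired by subject
        stats.append(w)
        pvals.append(p)
    reject, p_fdr, _, _ = multipletests(pvals, method='fdr_bh')
    return pd.DataFrame({'run_length': run_lengths, 'W': stats,
                         'p_fdr': p_fdr, 'significant': reject})
```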

Discussion

When the experimental context focuses exclusively on the superordinate or the basic level during ultra-rapid categorization, superordinate categorization is as fast as or faster than basic-level categorization (Fabre-Thorpe, 2011; Macé et al., 2009; Thorpe et al., 1996), a finding that contrasts with the classic basic-level advantage. The results of Experiment 5 offer some insight into why, by examining how local category context affects the relative speed of categorization of briefly exposed objects at different levels of abstraction.

Let us first consider the relative speed of categorization regardless of priming type. On average, basic-level categorization was significantly faster than superordinate categorization. This was true both for responses following a baseline trial (i.e., an unrelated digit parity judgment) and for responses following a prime trial (i.e., another categorization judgment), suggesting that even when image exposure is ultra-rapid, much as when exposure is unlimited, the basic-level advantage is the default outcome.

Turning to the more detailed analysis of priming types, we found that basic-level categorization was relatively robust to local variation in experimental context. Whether the preceding prime trial was a superordinate or a basic-level categorization did not influence the speed of basic-level categorization on the probe trial; while responses were somewhat faster when a “yes” response to the probe followed a “yes” response to the prime, this was merely at the level of overlapping responses from prime to probe, not overlapping categorizations. The relative lack of priming effects on basic-level categorization is consistent with the notion of a representational advantage for basic-level categories (Jolicoeur et al., 1984; Murphy & Smith, 1982; Richler et al., 2011; Rosch et al., 1976).

In contrast, superordinate categorization was significantly affected by the type of prime. Matching superordinate and basic-level primes both led to faster superordinate categorization of the probe. These results are consistent with previous reports of facilitated processing of targets due to unconscious primes from the same general-level categories (e.g., natural vs. artifact; Dell'Acqua & Grainger, 1999). The facilitation we observed is consistent with a spreading activation account (Meyer & Schvaneveldt, 1971), whereby activation of the relevant perceptual representations and their connections to the animal concept during the prime trial facilitates a subsequent superordinate categorization (Marsolek, 2008; McNamara, 2005). Interestingly, non-matching basic prime trials (e.g., categorizing “dog” but briefly shown a picture of a bird) significantly slowed subsequent superordinate categorization. This inhibition is potentially the result of so-called “antipriming” (Marsolek, 2008), the notion that the more the internal representations of two objects overlap, the more those representations will interfere with each other. In this case, the representational overlap between dogs and birds, coupled with activated bird representations during the prime trial, may have led to inhibited processing in the subsequent animal categorization.

Finally, a post-hoc analysis of repeated categorizations at the same level revealed that the basic-level advantage in response times was largely eliminated after only four repeated trials of superordinate-level categorization. Statistically, the elimination was due largely to a decrease in superordinate categorization RT, a finding consistent with what was observed in Experiment 4. One possibility is that this speedup in superordinate categorization reflects a transition from mediated processing through semantic knowledge to more direct retrieval of perceptual representations in episodic memory (Logan, 1988; Nosofsky & Palmeri, 1997; Palmeri, 1997). Also extending the results of Experiment 4, the increased efficiency in superordinate categorization due to repetitions of the target category was short-lived. Recently stored representations of superordinate categories may be available for fast subsequent superordinate categorization, but only for a limited window of time.

General Discussion

The present article explored a puzzle of visual object categorization: you usually spot the bird fastest, but at a glance, you spot the animal faster. Why does the relative timing of object categorizations at different levels of abstraction vary considerably under speeded category verification with unlimited exposure (Rosch et al., 1976) versus ultra-rapid categorization with brief exposure (Thorpe et al., 1996)?

The relative timing of categorization at different levels of abstraction has long been a foundational empirical result in theoretical debates about the mechanisms that underlie object recognition and categorization (e.g., Fabre-Thorpe, 2011; Grill-Spector & Kanwisher, 2005; Jolicoeur et al., 1984; Mack et al., 2008; Palmeri & Gauthier, 2004; Palmeri et al., 2004; Thorpe et al., 1996), the role of learning and expertise in shaping perception and conception (Gauthier, Tarr, & Bub, 2009; Palmeri et al., 2004; Tanaka & Taylor, 1991), and the development of perceptual and conceptual knowledge (Mandler et al., 1991; Rosch et al., 1976). A common theoretical position running through many of these debates is that certain levels of abstraction are faster, better, first because they are mechanistically and representationally primary in some way – they are accessed first, they logically precede other levels, they develop first. But if what is considered faster, better, first depends critically on how categorization is probed experimentally, then a prerequisite for applying these results theoretically is to understand why they differ empirically.

Perhaps the best-known manipulation that significantly affects the relative speed of categorization at different levels is exposure duration (e.g., Fabre-Thorpe, 2011; Macé et al., 2009). Explanations for why variation in exposure duration causes variation in the relative speed of categorization at different levels of abstraction often stem from considering the time course of perceptual processing. For example, consider the predictions of the PDP-based model of semantic knowledge proposed by Rogers and McClelland (2004, 2008). According to this model, the internal representation of an object follows an evolving coarse-to-fine trajectory over time (Rogers & Patterson, 2007). Given unlimited or relatively long exposure durations, the final representation reached by that trajectory favors basic-level categorizations over superordinate categorizations, as we observed in Experiments 1 and 2. However, by limiting time, either by limiting the time available to make a response (Rogers & Patterson, 2007) or by limiting the exposure duration of the object (see Experiments 1-3), it is possible that only coarse representations have been reached along the temporally evolving trajectory, and those representations favor superordinate categorizations over basic-level categorizations.

Coarse-to-fine activation of object representations over time is also consistent with the extended generalized context model (EGCM) of Lamberts (1998, 2000; Lamberts & Freeman, 1999; see also Cohen & Nosofsky, 2003). According to EGCM, perceptual representations emerge stochastically and are built up over time, with salient object dimensions included in representations at faster rates than less salient, but potentially more diagnostic, object dimensions. If coarse object features are generally more salient, and these coarse features are available at faster rates than fine features (see also Schyns & Oliva, 1994), then EGCM may predict a relative advantage for superordinate categorization with brief exposure that diminishes with longer exposure (Lamberts & Freeman, 1999).

It would be theoretically simple if the time course of perceptual processing by itself predicted whether or not there was a basic-level advantage in object categorization. However, we found that target category context significantly modulated those dynamics. For example, in Experiments 1 and 4, only when superordinate categorizations were blocked did the basic-level advantage disappear. And as we observed in Experiment 5, this modulation of superordinate categorization unfolds fairly quickly. The previous categorization trial can significantly influence a current superordinate categorization but has relatively little effect on a current basic-level categorization. With a short sequence of only four superordinate categorizations in a row, superordinate categorization became as fast as basic-level categorization, temporarily eliminating the classic basic-level advantage within an otherwise mixed category context. Why might superordinate categorization be sensitive to category context when basic-level categorization is not?

At a minimum, this context dependence suggests an asymmetry in representations of superordinate and basic-level categories. For example, perceptual categorization requires selective attention to diagnostic dimensions (Kruschke, 1992; Nosofsky, 1986), and different dimensions may be diagnostic for superordinate versus basic-level categorization (Palmeri, 1999). Imagine that the pattern of selective attention to dimensions for superordinate categorization must be allocated flexibly. Then similar categorizations from trial to trial might benefit superordinate categorizations because they demand the same pattern of selective attention. When the superordinate categorization changes, a new pattern of selective attention to dimensions must be established (see also Logan & Gordon, 2001). By contrast, imagine that the pattern of selective attention to dimensions for basic-level categorization is the default (Richler et al., 2011), perhaps embodied in visual representations themselves (Folstein et al., 2012, 2013; Gauthier & Palmeri, 2002). Then, whether or not there are similar categorizations from trial to trial might have relatively little impact on basic-level categorization.

Another potential source for this context dependence may be found in accounts of antipriming (Marsolek, 2008). Many theories suggest that object categories are represented in sparse, distributed neural representations, with individual exemplars activating largely overlapping activation patterns consistent with their category membership (e.g., Haxby, Gobbini, Furey, Ishai, Schouten, & Pietrini, 2001; Rolls & Tovee, 1995). Antipriming is described as a form of inhibition in processing a current object that has overlapping representations with recently processed objects. This inhibition is hypothesized to arise from constantly evolving visual and conceptual representations shaped by ever-present error-driven or Hebbian learning mechanisms (Marsolek et al., 2010). In a mixed category context, sequences of superordinate categorizations support one another, leading to faster responses. But just a single intervening basic-level categorization in this mixed context may be sufficient to significantly inhibit a subsequent superordinate categorization. Intervening superordinate categorizations may have less effect on basic-level categorizations because basic-level category representations are less influenced by local learning mechanisms. There is a representational asymmetry, with superordinate categories potentially more malleable based on local context than basic-level categories.

What is clear from the current study is that neither the time course of perceptual processing nor the context of the categorization task alone sufficiently explains the speed of categorization at different levels of abstraction. Rather, it is the interaction of these factors that fully predicts when categorization at one level will be faster than at another. In its default state, the visual categorization system seems biased towards basic-level categorization. But with limited perceptual processing, which allows relatively better encoding of coarser perceptual features, and an established attentional set and/or activated superordinate category representations from previous repetitions of similar categorizations, it may be relatively more efficient to categorize an object as an animal (superordinate) than as a dog (basic).

We should note that an explicit design goal of our work was to bridge the paradigms of speeded category verification and ultra-rapid categorization, borrowing not only their experimental designs but also the stimulus categories commonly used in those studies. In any experiment using real-world stimuli, there is a choice to be made regarding which objects and which object categories to use. We largely circumvented any explicit decision by using many of the categories that have been used in past work. Of course, with a limited sampling of object categories, there is an inherent limitation in the generalizability of our experimental findings, since the effects we observed might not apply to all stimulus classes. For example, the superordinate category of animal may have perceptual features that are more diagnostic than those of other superordinate categories. It has been shown that simple natural vs. man-made/artifact categorizations, akin to the superordinate categorizations in the current study, can be performed based on features of global structure found in low-spatial-frequency information in scenes (Schyns & Oliva, 1994) and global shape contours of objects (e.g., curvilinearity vs. rectilinearity; Levin, Takarae, Miner, & Keil, 2001). It is possible that other superordinate categories are not as easily discriminated by these sorts of perceptual features and that the current findings would not necessarily generalize to categorizing those stimulus classes.

It is also important to note that the similarity between the contrasting categories is a critical factor in the speed of categorization decisions. Deciding between more similar (dog vs. cat) or less similar (dog vs. bird) basic-level categories can make for faster or slower categorizations (Bowers & Jones, 2008; Macé et al., 2009; Mack & Palmeri, 2010a). In the current study, we specifically investigated the kinds of superordinate (e.g., animal vs. means of transportation) and basic-level (e.g., dog vs. bird) categorizations that have been used in prior research and that clearly demonstrate the speed differences between the two paradigms of interest (e.g., Macé et al., 2009; Rosch et al., 1976). By building on the existing literatures, we can offer a new empirical and theoretical starting point for reconciling the differences between speeded category verification and ultra-rapid categorization.

One recent study of ultra-rapid categorization targeted the role of exposure duration in categorizing at different levels of abstraction (Poncet & Fabre-Thorpe, 2014). In that study, an advantage for superordinate categorization was observed across exposure durations of 25, 250, and 500ms in a paradigm consistent with the typical ultra-rapid categorization task. In other words, even with longer exposure durations, when the same categorization was repeated continuously over many trials (200), Poncet and Fabre-Thorpe found a superordinate-level advantage. The discrepancy between the results of Experiment 2 in the current study (a basic-level advantage with an exposure duration of 250ms) and the superordinate advantage observed by Poncet and Fabre-Thorpe requires further empirical investigation. But one possibility is that the time course of perceptual encoding plays a smaller role in the speed of categorization decisions when an experimental context with many repetitions of the same categorization allows for a well-established attentional set and/or strongly activated category representations from trial to trial. Together, the Poncet and Fabre-Thorpe findings and the results of the current study suggest that context plays an influential role in category decision making (Palmeri & Mack, 2015).

On the one hand, we do not want the fact that superordinate categorization appears to depend on local category context to be used to discount findings from ultra-rapid categorization experiments. It may well be that the superordinate advantage emerges only when exposure is brief and category context is blocked, as our results suggest, but it is often just as critical to demonstrate that something can happen as it is to document what usually happens (Mook, 1983). The fact that superordinate categorization can be as fast as or faster than basic-level categorization supports the hypothesis that superordinate categorization does not depend on a basic-level categorization happening first (Fabre-Thorpe, 2011). Categorization at different levels of abstraction should not be characterized in terms of requisite stages of categorization (Palmeri et al., 2004). It also supports the hypothesis that the perceptual information available with brief exposure supports superordinate categorization more than basic-level categorization, even if this must be conditionalized experimentally on the local category context.

On the other hand, it is also clear that superordinate categorization should not be considered something akin to a default mode of categorization, that superordinate categorization has primacy over basic-level categorization, or that superordinate categorizations are made on an initial sweep through the visual system, because its time course is so dependent on local trial context. If superordinate categorizations emerged first over the time course of categorizing any object, then a superordinate advantage with brief exposure should be observed irrespective of local trial context. That does not seem to be the case. While it is possible to create conditions in which the basic-level advantage is eliminated, limiting exposure is not by itself sufficient to do so under all conditions or contexts. Neither basic-level nor superordinate categorization has structural primacy. Perceptual representations evolve over time within the visual processing hierarchy, but that does not necessarily mean that categorization unfolds within a hierarchy as well.

Acknowledgements

This article is based on a doctoral dissertation submitted to Vanderbilt University. This research was funded by the Temporal Dynamics of Learning Center (SMA-1041755), an NSF funded Science of Learning Center, NSF grant BCS-1257098, a grant from the James S. McDonnell Foundation, and NIH grant F32-MH100904. A special thanks to M.L.M.'s mentor and co-author, Thomas Palmeri, for his generous support and encouragement during M.L.M.'s graduate school career. Thank you to the members of M.L.M.'s dissertation committee, Isabel Gauthier, Dan Levin, and Aude Oliva, for helpful comments and criticisms on this work. Additionally, Jennifer Richler provided much support and many useful suggestions throughout all aspects of the presented work. We also thank Justin Barisich and Laura Stelianou for assistance in testing participants in these studies.

References

  • Bacon-Macé N, Macé MJ, Fabre-Thorpe M, Thorpe SJ. The time course of visual processing: Backward masking and natural scene categorisation. Vision Research. 2005;45(11):1459–1469.
  • Bacon-Macé N, Kirchner H, Fabre-Thorpe M, Thorpe SJ. Effects of task requirements on rapid natural scene processing: From common sensory encoding to distinct decisional mechanisms. Journal of Experimental Psychology: Human Perception and Performance. 2007;33(5):1013–1026.
  • Bar M. Visual objects in context. Nature Reviews: Neuroscience. 2004;5:617–629.
  • Benjamini Y, Hochberg Y. Controlling the false discovery rate: A practical and powerful approach to multiple testing. Journal of the Royal Statistical Society, Series B. 1995;57(1):289–300.
  • Bowers JS, Jones KW. Detecting objects is easier than categorizing them. Quarterly Journal of Experimental Psychology. 2008;61:552–557.
  • Breitmeyer B, Ogmen H. Visual masking: Time slices through conscious and unconscious vision. 2nd ed. Oxford University Press; New York, NY: 2006.
  • Carlson TA, Simmons RA, Kriegeskorte N, Slevc LR. The emergence of semantic meaning in the ventral temporal pathway. Journal of Cognitive Neuroscience. 2013;26(1):120–131.
  • Cohen A, Nosofsky RM. An extension of the exemplar-based random walk model to separable-dimension stimuli. Journal of Mathematical Psychology. 2003;47:150–165.
  • Dailey MN, Cottrell GW. Organization of face and object recognition in modular neural networks. Neural Networks. 1999;12(7):1053–1074.
  • Dell'Acqua R, Grainger J. Unconscious semantic priming from pictures. Cognition. 1999;73:1–15.
  • Delorme A, Rousselet GA, Macé MJ-M, Fabre-Thorpe M. Interaction of top-down and bottom-up processing in the fast visual analysis of natural scenes. Cognitive Brain Research. 2004;19(2):103–113.
  • Edelman S. Representation and recognition in vision. MIT Press; Cambridge, MA: 1999.
  • Fabre-Thorpe M, Delorme A, Marlot C, Thorpe SJ. A limit to the speed of processing in ultra-rapid visual categorisation of novel natural scenes. Journal of Cognitive Neuroscience. 2001;13:171–180.
  • Fabre-Thorpe M. The characteristics and limits of rapid visual categorization. Frontiers in Perception Science. 2011;2:243.
  • Farah MJ. Visual agnosia: Disorders of object recognition and what they tell us about normal vision. The MIT Press; Cambridge, MA: 1990.
  • Fize D, Fabre-Thorpe M, Richard G, Doyon B, Thorpe SJ. Rapid categorization of foveal and extrafoveal natural images: Associated ERPs and effects of lateralization. Brain and Cognition. 2005;59(2):145–158.
  • Folstein J, Gauthier I, Palmeri TJ. Not all morph spaces stretch alike: How category learning affects object perception. Journal of Experimental Psychology: Learning, Memory, and Cognition. 2012;38(4):807–820.
  • Folstein J, Palmeri TJ, Gauthier I. Category learning increases discriminability of relevant object dimensions in visual cortex. Cerebral Cortex. 2013;23(4):814–823.
  • Freedman DJ, Riesenhuber M, Poggio T, Miller EK. Categorical representation of visual stimuli in the primate prefrontal cortex. Science. 2001;291:312–316.
  • Gauthier I, Skudlarski P, Gore JC, Anderson AW. Expertise for cars and birds recruits brain areas involved in face recognition. Nature Neuroscience. 2000;3(2):191–197.
  • Gauthier I, Tarr MJ, Bub D. Perceptual expertise: Bridging brain and behavior. Oxford University Press; 2009.
  • Gauthier I, Palmeri TJ. Visual neurons: Categorization-based selectivity. Current Biology. 2002;12:R282–R284.
  • Grill-Spector K, Kanwisher N. Visual recognition: As soon as you know it is there you know what it is. Psychological Science. 2005;16:152–160.
  • Haxby JV, Gobbini MI, Furey ML, Ishai A, Schouten JL, Pietrini P. Distributed and overlapping representations of faces and objects in ventral temporal cortex. Science. 2001;293(5539):2425–2430.
  • Hodges JR, Graham N, Patterson K. Charting the progression in semantic dementia: Implications for the organisation of semantic memory. Memory. 1995;3:463–495.
  • Johnson KE, Mervis CB. Effects of varying levels of expertise on the basic-level of categorization. Journal of Experimental Psychology: General. 1997;126(3):248–277.
  • Jolicoeur P, Gluck MA, Kosslyn SM. Pictures and names: Making the connection. Cognitive Psychology. 1984;16(2):243–275.
  • Jones M, Curran T, Mozer MC, Wilder MH. Sequential effects in response time reveal learning mechanisms and event representations. Psychological Review. 2013;120:628–666.
  • Kirchner H, Thorpe SJ. Ultra-rapid object detection with saccadic eye movements: Visual processing speed revisited. Vision Research. 2006;46(11):1762–1776.
  • Kruschke JK. ALCOVE: An exemplar-based connectionist model of category learning. Psychological Review. 1992;99(1):22–44.
  • Lamberts K. The time course of categorization. Journal of Experimental Psychology: Learning, Memory, and Cognition. 1998;24:695–711.
  • Lamberts K. Information-accumulation theory of speeded categorization. Psychological Review. 2000;107:227–260.
  • Lamberts K, Freeman RPJ. Categorization of briefly presented objects. Psychological Research. 1999;62:107–117.
  • Levin DT, Takarae Y, Miner A, Keil FC. Efficient visual search by category: Specifying the features that mark the difference between artifacts and animals in preattentive vision. Perception and Psychophysics. 2001;63:676–697.
  • Logan GD. Toward an instance theory of automatization. Psychological Review. 1988;95(4):492–527.
  • Logan GD, Gordon RD. Executive control of visual attention in dual-task situations. Psychological Review. 2001;108:393–434.
  • Logan GD, Schneider DW, Bundesen C. Still clever after all these years: Searching for the homunculus in explicitly cued task switching. Journal of Experimental Psychology: Human Perception and Performance. 2007;33:978–994.
  • Macé MJ-M, Joubert OR, Nespoulous J-L, Fabre-Thorpe M. Time-course of visual categorizations: You spot the animal faster than the bird. PLoS ONE. 2009;4(6):e5927.
  • Mack ML, Gauthier I, Sadr J, Palmeri TJ. Object detection and basic-level categorization: Sometimes you know it is there before you know what it is. Psychonomic Bulletin & Review. 2008;15:28–35.
  • Mack ML, Palmeri TJ. Decoupling object detection and categorization. Journal of Experimental Psychology: Human Perception and Performance. 2010a;36:1067–1079.
  • Mack ML, Palmeri TJ. The speed of categorization: A priority for people? Journal of Vision. 2010b;10(7):988.
  • Mack ML, Palmeri TJ. The timing of visual object categorization. Frontiers in Perception Science. 2011;2:165.
  • Mack ML, Preston AR, Love BC. Decoding the brain's algorithm for categorization from its neural implementation. Current Biology. 2013;23:2023–2027.
  • Mack ML, Wong AC-N, Gauthier I, Tanaka JW, Palmeri TJ. Time course of visual object categorization: Fastest does not necessarily mean first. Vision Research. 2009;49:1961–1968.
  • Mandler JM, Bauer PJ, McDonough L. Separating the sheep from the goats: Differentiating global categories. Cognitive Psychology. 1991;23:263–298.
  • Mandler JM, McDonough L. Advancing downward to the basic level. Journal of Cognition and Development. 2000;1(4):379–403.
  • Marsolek CJ. Dissociable neural subsystems underlie abstract and specific object recognition. Psychological Science. 1999;10:111–118.
  • Marsolek CJ. What antipriming reveals about priming. Trends in Cognitive Sciences. 2008;12(5):176–181.
  • Marsolek CJ, Deason RG, Ketz NA, Ramanathan P, Bernat EM, Steele VR, Patrick CJ, et al. Identifying objects impairs knowledge of other objects: A relearning explanation for the neural repetition effect. NeuroImage. 2010;49(2):1919–1932.
  • McNamara TP. Semantic priming: Perspectives from memory and word recognition. Psychology Press; New York: 2005.
  • Mervis C, Rosch E. Categorization of natural objects. Annual Review of Psychology. 1981;32:89–113.
  • Meyer DE, Schvaneveldt RW. Facilitation in recognizing pairs of words: Evidence of a dependence between retrieval operations. Journal of Experimental Psychology. 1971;90:227–234.
  • Mook DG. In defense of external invalidity. American Psychologist. 1983;38:379–387.
  • Murphy GL, Brownell HH. Category differentiation in object recognition: Typicality constraints on the basic category advantage. Journal of Experimental Psychology: Learning, Memory, and Cognition. 1985;11(1):70–84.
  • Murphy GL, Smith EE. Basic level superiority in picture categorization. Journal of Verbal Learning and Verbal Behavior. 1982;21:1–20.
  • Nosofsky RM. Attention, similarity, and the identification-categorization relationship. Journal of Experimental Psychology: General. 1986;115(1):39–57.
  • Nosofsky RM, Palmeri TJ. An exemplar-based random walk model of speeded classification. Psychological Review. 1997;104(2):266–300.
  • Oliva A, Schyns PG. Coarse blobs or fine edges? Evidence that information diagnosticity changes the perception of complex visual stimuli. Cognitive Psychology. 1997;34:72–107.
  • Palmeri TJ. Exemplar similarity and the development of automaticity. Journal of Experimental Psychology: Learning, Memory, and Cognition. 1997;23(2):324–354.
  • Palmeri TJ. Learning hierarchically structured categories: A comparison of category learning models. Psychonomic Bulletin & Review. 1999;6:495–503.
  • Palmeri TJ, Mack ML. How experimental trial context affects perceptual categorization. Frontiers in Psychology. 2015;6:180.
  • Palmeri TJ, Tarr M. Visual object perception and long-term memory. In: Luck S, Hollingworth A, editors. Visual memory. Oxford University Press; 2008. pp. 163–207.
  • Palmeri TJ, Wong AC-N, Gauthier I. Computational approaches to the development of perceptual expertise. Trends in Cognitive Science. 2004;8:378–386.
  • Patterson K, Nestor PJ, Rogers TT. Where do you know what you know? The representation of semantic knowledge in the human brain. Nature Reviews Neuroscience. 2007;8:976–987.
  • Poncet M, Fabre-Thorpe M. Stimulus duration and diversity do not matter: The animal is seen before the bird. The European Journal of Neuroscience. 2014;39(9):1508–1516.
  • Pouget P, Logan GD, Palmeri TJ, Boucher L, Paré M, Schall JD. Neural basis of adaptive response time adjustment during saccade countermanding. Journal of Neuroscience. 2011;31:12604–12612.
  • Richler JJ, Gauthier I, Palmeri TJ. Automaticity of basic-level categorization accounts for naming effects in recognition memory. Journal of Experimental Psychology: Learning, Memory, and Cognition. 2011;37(6):1579–1587.
  • Richler JJ, Palmeri TJ. Visual category learning. Wiley Interdisciplinary Reviews in Cognitive Science. 2014;5:75–94.
  • Riesenhuber M, Poggio T. Models of object recognition. Nature Neuroscience. 2000;3:1199–1204.
  • Rogers TT, McClelland JL. Semantic cognition: A parallel distributed processing approach. MIT Press; Cambridge, MA: 2004.
  • Rogers TT, McClelland JL. Précis of Semantic cognition: A parallel distributed processing approach. Behavioral and Brain Sciences. 2008;31(6):689–714.
  • Rogers TT, Patterson K. Object categorization: Reversals and explanations of the basic-level advantage. Journal of Experimental Psychology: General. 2007;136:451–469.
  • Rolls ET, Tovee MJ. Sparseness of the neuronal representation of stimuli in the primate temporal visual cortex. Journal of Neurophysiology. 1995;73(2):713–726.
  • Rolls ET, Tovee MJ, Panzeri S. The neurophysiology of backward visual masking: Information analysis. Journal of Cognitive Neuroscience. 1999;11(3):300–311.
  • Rosch E. Principles of categorization. In: Rosch E, Lloyd BB, editors. Cognition and categorization. Lawrence Erlbaum Associates; Hillsdale: 1978. pp. 27–48.
  • Rosch E, Mervis CB, Gray W, Johnson D, Boyes-Braem P. Basic objects in natural categories. Cognitive Psychology. 1976;8:382–439.
  • Rousselet GA, Macé MJ-M, Fabre-Thorpe M. Is it an animal? Is it a human face? Fast processing in upright and inverted natural scenes. Journal of Vision. 2003;3(6):440–456.
  • Rousselet GA, Macé MJ-M, Thorpe S, Fabre-Thorpe M. Limits of ERP differences in tracking object processing speed. Journal of Cognitive Neuroscience. 2007;19(8):1241–1258.
  • Scott LS, Tanaka JW, Sheinberg DL, Curran T. The role of category learning in the acquisition and retention of perceptual expertise: A behavioral and neurophysiological study. Brain Research. 2008;1210:204–215.
  • Serre T, Oliva A, Poggio T. A feedforward architecture accounts for rapid categorization. Proceedings of the National Academy of Science. 2007;104:6424–6429.
  • Sigala N, Logothetis NK. Visual categorization shapes feature selectivity in the primate temporal cortex. Nature. 2002;415:318–320.
  • Smith EE, Shoben EJ, Rips LJ. Structure and process in semantic memory: A featural model for semantic decisions. Psychological Review. 1974;81:214–241.
  • Stewart N, Brown GDA, Chater N. Absolute identification by relative judgment. Psychological Review. 2005;112:881–911.
  • Tanaka JW, Taylor M. Object categories and expertise: Is the basic level in the eye of the beholder? Cognitive Psychology. 1991;23(3):457–482.
  • Thorpe S, Fize D, Marlot C. Speed of processing in the human visual system. Nature. 1996;381:520–522.
  • VanRullen R, Koch C. Visual selective behavior can be triggered by a feed-forward process. Journal of Cognitive Neuroscience. 2003;15(2):209–217.
  • VanRullen R, Thorpe SJ. Is it a bird? Is it a plane? Ultra-rapid visual categorisation of natural and artifactual objects. Perception. 2001a;30:655–668.
  • VanRullen R, Thorpe SJ. The time course of visual processing: From early perception to decision-making. Journal of Cognitive Neuroscience. 2001b;13:454–461.
  • Wickelgren WA, Corbett AT. Associative interference and retrieval dynamics in yes-no recall and recognition. Journal of Experimental Psychology: Human Learning and Memory. 1977;3(2):189–202.
  • Wong AC-N, Palmeri TJ, Gauthier I. Conditions for face-like expertise with objects: Becoming a Ziggerin expert – but which type? Psychological Science. 2009;20(9):1108–1117.
