Novel-view scene recognition relies on identifying spatial reference directions
Section snippets
Experiment 1
In Experiment 1, we replicated the experiments of Simons and Wang (1998). After viewing an array of five objects for three seconds, participants either walked to a new viewing position 49° from the learning position or remained at the learning position, in both cases while blindfolded. One object was moved to a new location after participants were blindfolded. Ten seconds after putting on the blindfold, participants removed it and indicated which object had been moved.
Experiment 2
In Experiment 2, a short chopstick coated with phosphorescent paint was placed at the center of the table, at an angle of 49° counterclockwise from the study viewpoint (Fig. 3) for the novel test view group and at 0° from the study viewpoint for the familiar test view group, so that the chopstick pointed to the test viewpoint for both groups. We investigated whether change detection at a novel view caused by the table rotation was as accurate as change detection at a novel view caused by the …
Experiment 3
In Experiment 3, a short chopstick coated with phosphorescent paint was placed at the center of the table pointing to the study viewpoint during the testing phase. We investigated whether position change detection at a novel view was as accurate when the novel view was caused by table rotation as when it was caused by observer locomotion.
Experiment 4
In Experiment 4, the angular distance between the study view and the novel view was 98° instead of the 49° used in Experiment 1. Farrell and Robertson (1998, p. 229) reported a linear increase in errors in pointing to objects as a function of rotation magnitude in the updating condition, indicating that errors in updating one's position and orientation accumulate over greater distances. Accordingly, we assumed that the inaccuracy in updating the self with respect to the spatial reference direction of the scene …
Experiment 5
In Experiments 2 and 3, participants were informed of the to-be-tested viewpoint or the study viewpoint in both the table rotation and table stationary conditions. Participants might therefore have been able to use the chopstick cue even when the novel view was caused by their locomotion. We were thus unable to examine the relative facilitative effects of knowledge about the to-be-tested viewpoint, knowledge about the study viewpoint, and locomotion information on change …
Experiment 6
In the previous experiments, a chopstick was placed either only at study (Experiment 2, and the condition indicating the test viewpoint in Experiment 5) or only at test (Experiment 3, and the condition indicating the study viewpoint in Experiment 5). These manipulations were included to reduce the likelihood that the chopstick would influence scene recognition as a reference object. When a chopstick was presented only at study, other objects might be coded with respect to the chopstick at study, but that …
General discussion
The goal of this project was to investigate whether observer locomotion, compared with table rotation, provides unique information facilitating novel-view scene recognition. The findings of the experiments lead to a negative answer. Novel-view scene recognition was as accurate in the table rotation condition as in the observer locomotion condition when the to-be-tested viewpoint was indicated during the study phase, when the study viewing direction was indicated during the test phase, and when …
Acknowledgements
Preparation of this paper and the research reported in it were supported in part by a grant from the Natural Sciences and Engineering Research Council of Canada and a grant from the National Natural Science Foundation of China (30770709) to W.M. and National Institute of Mental Health Grant 2-R01-MH57868 to T.P.M. We are grateful to Dr. Andrew Hollingworth, Dr. Daniel Simons, and one anonymous reviewer for their helpful comments on a previous version of this manuscript.
References (25)
- et al. (2004). Orientational manoeuvres in the dark: Dissociating allocentric and egocentric influences on spatial memory. Cognition.
- et al. (2008). Intrinsic frames of reference and egocentric viewpoints in scene recognition. Cognition.
- et al. (2008). Reference directions and reference objects in spatial memory of a briefly-viewed layout. Cognition.
- et al. (2001). Systems of spatial reference in human memory. Cognitive Psychology.
- et al. (2002). Human spatial representation: Insights from animals. Trends in Cognitive Sciences.
- Spatial cognition and the brain.
- et al. (2007). Remembering the past and imagining the future: A neural model of spatial memory and imagery. Psychological Review.
- et al. (1999). View dependence in scene recognition after active learning. Memory and Cognition.
- et al. (2003). Extrinsic cues aid shape recognition from novel viewpoints. Journal of Vision.
- (1997). Viewpoint dependence in scene recognition. Psychological Science.
- Mental rotation and the automatic updating of body-centered spatial relationships. Journal of Experimental Psychology: Learning, Memory, and Cognition.
- Allocentric coding of object-to-object relations in overlearned and novel environments. Journal of Experimental Psychology: Learning, Memory, and Cognition.
Cited by (25)
The impact of adding perspective-taking to spatial referencing during human–robot interaction
2020, Robotics and Autonomous Systems

Competing perspectives on frames of reference in language and thought
2018, Cognition
Citation excerpt: "Thus, updating will likely be more difficult for the 180° setup, which required a greater distance traveled between tables and a greater turn than for the 90° setup. Additionally, being able to see the same landmarks in the environment at a second location will prime and strengthen the geocentric representation (e.g., Burgess et al., 2004; Etienne, Maurer, & Séguinot, 1996; Mou, Zhang, & McNamara, 2009; Nardini et al., 2006). In our study, with a 90° turn, although the children could no longer view the first table, they could still see the side of the room where they initially memorized the array."
Age differences in path learning: The role of interference in updating spatial information
2015, Learning and Individual Differences
Citation excerpt: "Nowadays many researchers are carrying out spatial navigation studies using virtual reality, providing a better understanding of the spatial impairments both in normal or pathological aging (Akinlofa, O'Brian-Holt, & Elyan, 2014; Cohen & Hegarty, 2014; Martens & Antonenko, 2012). Given previous studies that point to a deterioration with age in updating spatial information (Harris & Wolbers, 2014), as well as the higher demand that implies combining prior with new information (Mou et al., 2009), we hypothesize that i) older adults will have a general worse performance compared with young and adults, ii) but at the same time all age groups would have a worse performance along the paths. A group of 20 young people (16 females and 4 males) (M = 21 years, SD = 0.56, range = 19–30 years), a group of 20 adults (17 females and 3 males) (M = 44.80 years, SD = 1.92, range = 31–55 years) and a group of 20 older adults (16 females and 4 males) (M = 64.15 years, SD = 1.49, range = 56–80 years) voluntarily took part in the experiment."
Retrieving enduring spatial representations after disorientation
2012, Cognition
Citation excerpt: "Presumably body-object vectors are specified in terms of the observer's body orientation, which changes as the person turns. Mou, McNamara, and their colleagues (Mou, McNamara, Valiquette, & Rump, 2004; Mou, Xiao, & McNamara, 2008; Mou, Zhang, & McNamara, 2009; Zhang et al., 2011) proposed that people have precise enduring spatial representations of objects' locations that are organized with respect to a fixed reference direction. In a newer version of this theoretical framework, Zhang et al. (2011) proposed that when an individual learns a layout of objects, he or she represents an object's location in terms of his or her body (as a special object) and/or in terms of other objects."