From playing basketball to ordering at a food counter, we frequently and effortlessly coordinate our attention with others towards a common focus: we look at the ball, or point at a piece of cake. This non-verbal coordination of attention plays a fundamental role in our social lives: it ensures that we refer to the same object, develop a shared language, understand each other’s mental states, and coordinate our actions. Models of joint attention generally attribute this accomplishment to gaze coordination. But are visual attentional mechanisms sufficient to achieve joint attention in all cases? Besides cases where visual information is missing, we show how combining it with other senses can be helpful, and even necessary for certain uses of joint attention. We explain the two ways in which non-visual cues contribute to joint attention: either as enhancers, when they complement gaze and pointing gestures in order to coordinate joint attention on visible objects, or as modality pointers, when joint attention needs to be shifted away from the whole object to one of its properties, say weight or texture. This multisensory approach to joint attention has important implications for social robotics, clinical diagnostics, pedagogy and theoretical debates on the construction of a shared world.
Are two senses more certain than one? Subjective confidence, as an instance of metacognition, has mostly been investigated on a sense-by-sense basis. Yet perception is most frequently multisensory. Here we consider the implications and relevance of understanding confidence at the multisensory level.
We cooperate with other people despite the risk of being exploited or hurt. If future artificial intelligence (AI) systems are benevolent and cooperative toward us, what will we do in return? Here we show that our cooperative dispositions are weaker when we interact with AI. In nine experiments, humans interacted with either another human or an AI agent in four classic social dilemma economic games and a newly designed game of Reciprocity that we introduce here. Contrary to the hypothesis that people mistrust algorithms, participants trusted their AI partners to be as cooperative as humans. However, they did not return the AI's benevolence as much, and exploited the AI more than they exploited humans. These findings warn that future self-driving cars or co-working robots, whose success depends on humans' returning their cooperativeness, run the risk of being exploited. This vulnerability calls not just for smarter machines but also for better human-centered policies.
The last couple of years have seen a rapid growth of interest in the study of crossmodal correspondences – the tendency for our brains to preferentially associate certain features or dimensions of stimuli across the senses. By now, robust empirical evidence supports the existence of numerous crossmodal correspondences, affecting people’s performance across a wide range of psychological tasks – in everything from the redundant target effect paradigm through to studies of the Implicit Association Test, and from speeded discrimination/classification tasks through to unspeeded spatial localisation and temporal order judgment tasks. However, one question that has yet to receive a satisfactory answer is whether crossmodal correspondences automatically affect people’s performance, as opposed to reflecting more of a strategic, or top-down, phenomenon. Here, we review the latest research on the topic of crossmodal correspondences to have addressed this issue. We argue that answering the question will require researchers to be more precise in terms of defining what exactly automaticity entails. Furthermore, one’s answer to the automaticity question may also hinge on the answer to a second question: Namely, whether crossmodal correspondences are all ‘of a kind’, or whether instead there may be several different kinds of crossmodal mapping. Different answers to the automaticity question may then be revealed depending on the type of correspondence under consideration. We make a number of suggestions for future research that might help to determine just how automatic crossmodal correspondences really are.
What if a blind person could 'see' with her ears? Thanks to Sensory Substitution Devices (SSDs), blind people now have access to out-of-reach objects, a privilege reserved so far for the sighted. In this paper, we show that the philosophical debates have fundamentally been misled into thinking that SSDs should be fitted among the existing senses or that they constitute a new sense. Contrary to the existing assumption that they get integrated at the sensory level, we present a new thesis according to which they are not sensory, and get vertically integrated on top of existing sensory abilities, from which they should be theoretically distinguished.
Humans are poorer at identifying smells and communicating about them, compared to other sensory domains. They also cannot easily organise odour sensations in a general conceptual space, as they can with colours. We challenge the conclusion that there is no olfactory conceptual map at all. Instead, we propose a new framework, with local conceptual spaces.
Humans are poorer at identifying smells and communicating about them, compared to other sensory domains. They also cannot easily organize odor sensations in a general conceptual space, where geometric distance could represent how similar or different all odors are. These two generalities are more or less accepted by psychologists, and they are often seen as connected: If there is no conceptual space for odors, then olfactory identification should indeed be poor. We propose here an important revision to this conclusion: We believe that the claim that there is no odor space is true only if by odor space, one means a conceptual space representing all possible odor sensations, in the paradigmatic sense used for instance for color. However, in a less paradigmatic sense, local conceptual spaces representing a given subset of odors do exist. Thus the absence of a global odor space does not warrant the conclusion that there is no olfactory conceptual map at all. Here we show how a localist account provides a new interpretation of experts and cross‐cultural categorization studies: Rather than being exceptions to the poor olfactory identification and communication usually seen elsewhere, experts and cross‐cultural categorization are here taken to corroborate the existence of local conceptual spaces.
Synaesthesia is a strange sensory blending: synaesthetes report experiences of colours or tastes associated with particular sounds or words. This volume presents new essays by scientists and philosophers exploring what such cases can tell us about the nature of perception and its boundaries with illusion and imagination.
A strong claim, often found in the literature, is that it is impossible to categorize perceptual properties unless one possesses the related concepts. The evidence from visual perception reviewed in this paper, however, questions this claim: Concepts, at least canonically defined, are ill-suited to explain perceptual categorisation, which is a fast and, crucially, largely involuntary and unconscious process that rests on quickly updated probabilistic calculations. I suggest here that perceptual categorisation rests on non-conceptual sorting principles. This changes the claim that categorisation cannot occur without concepts: It does not preclude that concepts remain necessary for categorisation, but opens the possibility that they are not, and that these sorting principles could be sufficient on their own.
Experimental research has shown that pairs of stimuli which are congruent and assumed to 'go together' are recalled more effectively than items presented in isolation. Will this multisensory memory benefit occur when stimuli are richer and longer, in an ecological setting? In the present study, we focused on an everyday situation of audio-visual learning and manipulated the relationship between audio guide tracks and viewed portraits in the galleries of Tate Britain. By varying the gender and narrative style of the voice-over, we examined how the perceived congruency and assumed unity of the audio guide track with painted portraits affected subsequent recall. We show that tracks perceived as best matching the viewed portraits led to greater recall of both sensory and linguistic content. We provide the first evidence that manipulating crossmodal congruence and unity assumptions can effectively impact memory in a multisensory ecological setting, even in the absence of precise temporal alignment between sensory cues.
Mental body-representations are highly plastic and can be modified after brief exposure to unexpected sensory feedback. While the role of vision, touch and proprioception in shaping body-representations has been highlighted by many studies, the auditory influences on mental body-representations remain poorly understood. Changes in body-representations by the manipulation of natural sounds produced when one's body impacts on surfaces have recently been evidenced. But will these changes also occur with non-naturalistic sounds, which provide no information about the impact produced by or on the body? Drawing on the well-documented capacity of dynamic changes in pitch to elicit impressions of motion along the vertical plane and of changes in object size, we asked participants to pull on their right index fingertip with their left hand while they were presented with brief sounds of rising, falling or constant pitches, and in the absence of visual information of their hands. Results show an "auditory Pinocchio" effect, with participants feeling and estimating their finger to be longer after the rising pitch condition. These results provide the first evidence that sounds that are not indicative of veridical movement, such as non-naturalistic sounds, can induce a Pinocchio-like change in body-representation when arbitrarily paired with a bodily action.
We provide a new account of the oft-mentioned special character of touch, showing that its superior reliability is subjective rather than objective: Touch provides higher certainty than vision, for the same level of objective accuracy.
Cognitive states, such as beliefs, desires and intentions, may influence how we perceive people and objects. If this is the case, are those influences worse when they occur implicitly rather than explicitly? Here we show that cognitive penetration in perception generally involves an implicit component. First, the process of influence is implicit, making us unaware that our perception is misrepresenting the world. This lack of awareness is the source of the epistemic threat raised by cognitive penetration. Second, the influencing state can be implicit, though it can also be or become explicit. Being unaware of the content of the influencing state, we argue, does not make as much difference to the epistemic threat as it does to the epistemic responsibility of the agent. Implicit influencers cannot be examined for their accuracy and justification, and cannot be voluntarily accepted by the perceiver. Conscious awareness, however, is not sufficient for attributing blame to the agent. An equally important condition is the degree of control that they can exercise to change the contents that influence perception or stop their influence. Here we suggest that such control can also result from social influence, and that cognitive penetrability of perception is therefore also a social issue.
Humans coordinate their focus of attention with others, either by gaze following or prior agreement. Though the effects of joint attention on perceptual and cognitive processing tend to be examined in purely visual environments, they should also appear in multisensory settings. According to a prevalent hypothesis, joint attention enhances visual information encoding and processing, over and above individual attention. If two individuals jointly attend to the visual components of an audiovisual event, this should affect the weighting of visual information during multisensory integration. We tested this prediction in this preregistered study, using the well-documented sound-induced flash illusions, where the integration of an incongruent number of visual flashes and auditory beeps results in a single flash being seen as two (fission illusion) and two flashes as one (fusion illusion). Participants were asked to count flashes either alone or together, and were expected to be less prone to both fission and fusion illusions when they jointly attended to the visual targets. However, illusions were as frequent when people attended to the flashes alone or with someone else, even though they responded faster during joint attention. Our results reveal the limitations of the theory that joint attention enhances visual processing, as it does not affect temporal audiovisual integration.
A common sense view is illustrated by Doubting Thomas, and surfaces in many philosophical and psychological writings: Touching is better than seeing. But can we make sense of this privilege? We rule out that it could mean that touch is more informative than vision, more ‘objective’ or more directly in contact with reality. Instead, we propose that touch offers not a perceptual, but a metacognitive advantage: touch is not more objective than vision but rather provides comparatively higher subjective certainty.
This collection of essays brings together research on sense modalities in general and spatial perception in particular in a systematic and interdisciplinary way. It updates a long-standing philosophical fascination with this topic by incorporating theoretical and empirical research from cognitive science, neuroscience, and psychology. The book is divided thematically to cover a wide range of established and emerging issues. Part I covers notions of objectivity and subjectivity in spatial perception and thinking. Part II focuses on the canonical distal senses, such as vision and audition. Part III concerns the chemical senses, including olfaction and gustation. Part IV discusses bodily awareness, peripersonal space, and touch. Finally, the volume concludes with Part V, on multimodality. Spatial Senses is an important contribution to the scholarly literature on the philosophy of perception that takes into account important advances in the sciences.
The paper aims at reconsidering the problem of “practical knowledge” at a proper level of generality, and at showing the role that personal abilities play in it. The notion of “practical knowledge” has long been the focus of debates both in philosophy and related areas in psychology. It has been wholly captured by debates about ‘knowledge’ and has more recently been challenged in its philosophical foundations as targeting a specific attitude of ‘knowing-how’. But what are the basic facts accounted for in the “knowing-how” debate? The problem is much more fundamental than knowledge: it addresses the need for an explanation of intelligent or guided behaviour that could account for some distinctive aspects involved in the performance, but without positing too much beyond the observable actions. This is what I call the problem of “practical mastery” (PM). PM raises three questions: What kind of behaviour requires such an explanation? What is distinctive about practical mastery? What does it consist in: a form of knowledge, or something else? I argue here that the notion of ability offers a less restrictive, though no less powerful, answer to these three questions. It can offer an independent objective grasp on the subjects of attribution. I conclude that the notion is central both to account for common-sense psychology and to understand what experimental psychology actually measures and tests for.