Visual Learning in Multisensory Environments

Topics in Cognitive Science 2 (2):217-225 (2010)

Abstract

We study the claim that multisensory environments are useful for visual learning because nonvisual percepts can be processed to produce error signals that people can use to adapt their visual systems. This hypothesis is motivated by a Bayesian network framework. The framework is useful because it ties together three observations that have appeared in the literature: (a) signals from nonvisual modalities can “teach” the visual system; (b) signals from nonvisual modalities can facilitate learning in the visual system; and (c) visual signals can become associated with (or be predicted by) signals from nonvisual modalities. Experimental data consistent with each of these observations are reviewed.

Links

PhilArchive


Similar books and articles

Multisensory Perception in Philosophy.Amber Ross & Mohan Matthen - 2021 - Multisensory Research 34 (3):219-231.
Statistical learning of social signals and its implications for the social brain hypothesis.Hjalmar K. Turesson & Asif A. Ghazanfar - 2011 - Interaction Studies 12 (3):397-417.
The dominance of the visual.Dustin Stokes & Stephen Biggs - 2014 - In Dustin Stokes, Mohan Matthen & Stephen Biggs (eds.), Perception and Its Modalities. New York, NY: Oxford University Press.

Analytics

Added to PP
2013-12-01