From PhilPapers forum Cognitive Sciences:

2015-06-09
Retina: Miscellaneous
Seeing Darkness
Is being in a completely dark room the same as being (congenitally) blind? Or do we "see" darkness?
Barlow would perhaps opt for the second answer. After all, he treats dark colors as excitatory stimuli for Off neurons, with effects the reverse of light stimuli: "on-center are centripetal white, centrifugal black; off-center are centrifugal white, centripetal black" (Barlow et al., "Retinal Ganglion Cells Responding Selectively to Direction", 1964). 
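Barlow's point can be sketched as a toy center-surround rule. Everything below is an illustrative assumption (the function, the 0.5 background, the stimulus values are mine, not data or a model from Barlow et al.); it only shows the sign logic by which a dark spot counts as an excitatory stimulus for an Off cell:

```python
# Toy sketch of ON-center vs OFF-center ganglion-cell responses to a
# small spot. Luminances run from 0 (black) to 1 (white) against an
# assumed 0.5 background; the center-minus-surround rule is illustrative.

def ganglion_response(center, surround, cell_type):
    """Signed rate change given mean luminance in the receptive-field
    center and surround. Positive = excitation, negative = inhibition."""
    background = 0.5
    drive = (center - background) - (surround - background)
    return drive if cell_type == "on" else -drive

# A white spot confined to the center excites the ON cell...
assert ganglion_response(center=1.0, surround=0.5, cell_type="on") > 0
# ...while a black spot in the center excites the OFF cell:
assert ganglion_response(center=0.0, surround=0.5, cell_type="off") > 0
```

Under this sign convention, darkness in the center is as much a positive stimulus for the Off cell as light is for the On cell, which is all the argument above needs.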
I must say that in this particular case I tend to agree with Barlow, at least as far as the excitatory effect of dark colors is concerned. Needless to say, I do not share his other conceptions regarding receptive fields, On-Off neurons, direction selectivity, or his overall conception of brain processes as reduction of redundancy. Which means, I'm afraid, that I have very little in common with this celebrated researcher.

Nonetheless, dark colors are not the same as darkness, are they? 
What do we see when we look into the mouth of a deep cavern? And why are not all our dreams painted on a black background? Photoreceptors are hyperpolarized by light, and are therefore activated by light and dark colors alike. That does not really help us answer the question of whether we can "see" darkness. In the total absence of light there is no reason for the photoreceptors to hyperpolarize. And I suppose they do not when we are having vivid dreams either: there is still no light to activate them. 
So if we can see light without our receptors being hyperpolarized, why not say that we can see darkness?
In my first thread (Retinal image and black spot), I tried to show that we do not see whatever falls on the blind spot: neither as a black hole, nor as a fill-in effect, as many would have us believe. But we do see the black nothing at the back of the cavern. How is that possible, unless our photoreceptors somehow produce these impressions? Everything we have learned about the brain teaches us of thresholds: the minimum level a (visual) stimulus must reach before we can sense it, or at least before a neuron depolarizes or a photoreceptor hyperpolarizes. Surely there are no light rays emanating from the back of the cavern, at least none strong enough to elicit a neuronal reaction?
 
All those pseudo-metaphysical considerations lead me to only one conclusion: we must not equate the "light" we see with the physical phenomenon that bears the same name. As with any sensation, explaining the laws under which the physical substrate functions does not explain the sensation itself. That does not mean we can ignore these laws; we just have to be careful how we use them in our explanations. And I'm afraid the device comes without a manual.

Do we get too much visual information?
The idea that the brain is an information-processing system is certainly shared by Vaney et al. ("Direction Selectivity of Ganglion Cells in the Retina", 2001). It leads them to conclude that "The optic nerve is effectively the information bottleneck in the visual system". I will certainly come back to this article when dealing with direction selectivity, so for now I will limit myself to the question of whether we can really say that the brain is confronted with more than it can handle.
The ratio of photoreceptors to ganglion cells (whose axons form the fibers of the optic nerve) would certainly seem to support the view that the brain had to devise special strategies to cope with the wealth of information provided by the former, given the bandwidth limitations of its optic nerve. 
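That ratio can be put in rough numbers. The counts below are common textbook estimates for one human eye (roughly 120 million rods, 6 million cones, 1 million ganglion cells), not figures taken from the articles cited here, and they are order-of-magnitude only:

```python
# Rough convergence ratio at the optic nerve, using approximate
# textbook estimates for the human eye (order of magnitude only).
RODS = 120_000_000
CONES = 6_000_000
GANGLION_CELLS = 1_000_000  # roughly one optic-nerve fiber per ganglion cell

photoreceptors = RODS + CONES
convergence = photoreceptors / GANGLION_CELLS
print(f"{photoreceptors:,} photoreceptors -> {GANGLION_CELLS:,} fibers")
print(f"convergence ratio ~ {convergence:.0f} : 1")  # ~126 : 1
```

A hundred-plus inputs per output fiber is the kind of figure that makes the "bottleneck" framing look compelling at first glance, which is exactly the framing questioned below.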
When posed this way, the problem seems very simple: get a wider optic nerve with more processing capability. Such a drastic solution would of course have far-reaching consequences at all levels of the brain and the body. But is it not strange that not a single organism has taken this path? Those that did could of course already be extinct, but what about less extreme solutions? Surely 5 or even 10% more capacity would be possible without too much hassle? Ask Intel, and they would tell you that even such a modest improvement can be economically very advantageous.
Again, maybe Evolution already did that, and we are simply the result so far. 
Or maybe it is just not a problem at all.
Our visual system has two levels: foveal, with high acuity, and peripheral, with wide spatial coverage. We have reflexes that direct our attention to where it is needed, and there we make use of the foveal system for finer detail. Why would we need more? Certain birds of prey seem to have a double fovea; I admit that I know very little about such visual systems and their pros and cons. What I do know is that if our whole visual field had the same richness of detail that our fovea presents to us, our brains would almost certainly be overwhelmed by the amount of information they would have to process at once.
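A back-of-the-envelope calculation makes the point concrete. The figures below are rough textbook estimates (peak foveal cone density around 150,000 per mm², human retinal area around 1,000 mm², actual cone count around 6 million); they are assumptions for an order-of-magnitude comparison, not measurements:

```python
# Back-of-the-envelope: how many cones would the retina need if it
# sampled everywhere at peak foveal density? Rough textbook figures,
# used only for an order-of-magnitude comparison.
FOVEAL_PEAK_DENSITY = 150_000   # cones per mm^2 at the foveal peak
RETINAL_AREA = 1_000            # mm^2, approximate human retina
ACTUAL_CONES = 6_000_000

uniform_cones = FOVEAL_PEAK_DENSITY * RETINAL_AREA
factor = uniform_cones / ACTUAL_CONES
print(f"uniform foveal sampling would need {uniform_cones:,} cones,")
print(f"about {factor:.0f}x the actual count")  # about 25x
```

On these assumptions, a uniformly foveal retina would need tens of times more cones, and everything downstream would have to scale accordingly.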
With this dual system the brain need not worry that it is getting too much information, nor that it is missing out on essential facts. Eye reflexes and movements provide a very valuable and reliable detection tool.
In other words, the information that our peripheral vision seems to keep hidden from us because of its poor acuity is in fact held at the ready for when we might need it.
This is a perfect example of the just-in-time (JIT) management that companies try so desperately to implement in their businesses.
It is also the reason why analyses based on the assumption of a brain overwhelmed by the amount of information it receives from the world very often miss the point. They are so busy devising complex strategies to cope with this "deficit" that they lose sight of the fact that neural processes are probably much simpler than they think.

