
2015-04-23
Retina: Miscellaneous
I cannot ever hope to treat all the important issues concerning the retina, so there will always remain things to add and others to reconsider. I propose to use this thread for just that: as a container of unresolved questions that need more work. I will try to use no more than a single entry for a single issue.

What does convergence in the retina mean?
Assuming I am on the right track, and that neurons do not hide any mysterious computational codes, then it is obvious that converging inputs can only affect the intensity of the original input, either by enhancing it or by reducing it. It also means that neither rods nor cones can contribute their spectral influence (including gray shades) to the receiving cell. After all, color sensitivity has already disappeared from view, making room for mere intensity-related effects, including changes in membrane conductance.
A simple scenario would have receptors, via the intermediary neuronal layers, already wired together in spatial configurations that make their interchange fast and uncomplicated. Otherwise we would have to take into account the necessity of dynamic connections being established "while we are looking". That would demand unrealistic (or so I presume) processing speeds of retinal cells. And even if retinal cells were capable of such speeds, fast dynamic connections are usually a typical homunculus approach: they presuppose an even faster intelligence to determine which cells should be connected together in real time. Unless we presuppose that cells secrete identifying chemicals to make "communication" possible, but that would be nothing else than a very complicated form of pre-wiring.
I will not pursue this avenue any further, as it does not seem very plausible to me.
[Slow dynamic connections are on the other hand very defendable, with no need to resort to the help of a homunculus.]

Why do we see different things each time?
And I do not mean by that why our (point of) view changes depending on our mood or other circumstances. No, I mean it literally. If receptors always have the same sensitivity curve, the probability of a neuron reacting the same way in different scenes is very high, unless the lighting conditions are extremely different.
That would mean that neuron A would always react to its optimal wavelength, and otherwise to the next best available wavelength, and so would neuron B and all other neurons. Instead of the world as we know it, we would be treated to a limited number of probable scenes that had no connection at all with reality. How come that does not happen?
Add to that the fact that intermediate colors, if the distinction between them and primary or complementary colors even makes any sense for the brain, cannot be formed by making different types of receptors converge (only intensity is added), and we have created for ourselves a comfort dish for philosophers, but a nightmare for scientists.
Still, this is nothing else but a simple consequence of the fact that the spectral composition of light functions the same way as a neurotransmitter that opens the ion channels and disappears immediately, leaving us with the aftermath.

What is obvious to me is that the empirical results of additive, multiplicative or subtractive color experiments explain very little, if anything at all, about how the brain treats colors. That should not be too surprising. After all, we had already learned that mixing light (electromagnetic waves) gives different results from mixing pigments (matter). All we have to do now is accept the fact that there is a third way of producing colors, that of the brain. I do not know how it is done, and I would not dare guess or speculate for fear of stumbling upon solutions à la Penrose. I prefer to consider it an empirical challenge that can only be answered by trained scientists, which I am most certainly not.


2015-04-23
Retina: Miscellaneous
The article I would like to discuss briefly does not concern the retina, but memory in general. As such, it is linked to every part of the brain, including vision processes. 

Eric Kandel, "The Molecular Biology of Memory Storage: A Dialogue Between Genes and Synapses" (Nobel lecture, 2000). (The pagination refers to the pdf version.)

As always, I will skip all technical details in an attempt to get to the core of the matter as seen through a layman's eyes.

General patterns of memory formation according to Kandel:
1) Short-term memory, activated by a brief stimulus, does not require any changes to the neuron beyond some internal reshuffling.
2) Long-term memory, the result of repeated stimuli, involves gene transcription and the synthesis of new proteins.
3) The effect of (2) is the creation of new synapses on the target neuron.

This last effect is the most convincing element in the whole argumentation. The first two points are supposed to describe processes that strengthen the connection between a stimulated neuron and its target. But the descriptions Kandel gives of these processes do not differ from the processes as analyzed by Levitan & Kaczmarek (2002). They can therefore, at least as far as I am concerned, be considered as general processes that happen each time a neuron is stimulated, whether accompanied by memory changes or not.
What is also interesting is the first point. If, as Kandel and colleagues claim, there is no radical change in the target neuron, then one can wonder how the brain can remember anything in the short term. The explanation is that the changes following a stimulus take a little time before disappearing altogether. In this picture, the "conformational changes", as Kandel calls them, are enough for short-term memory. So, in contrast to color sensation, we seem to have here a neural trace, maybe not of the original sensation itself, but at least of its memory. The problem is of course that this neural trace bears no resemblance whatsoever to the original sensation. But more importantly, it is most probably the same for all, or at least for indefinitely many, sensations. After all, this internal reshuffling (as I call it) is not made of unique processes, but will be the same for different sensations in different parts of the body and the brain. So all this neural trace can tell us is that it is a memory of some sensation, and even that depends on our knowledge of the circumstances in which it was created. Just as with color sensation, external observers would be unable to reconstruct the original sensation that produced these effects.
The last point would appear to justify a little more optimism. The creation of a new synapse tells us that, whatever the sensation was, it was at least strong enough to trigger gene transcription and synaptic growth. Still, it is not what you would call a positive identification, and we could have said the exact opposite in the first case: that the sensation was NOT strong enough to trigger those gene processes.
Last, but certainly not least, Kandel studies the gill-withdrawal reflex of Aplysia because his "radically reductionist approach" (p.3) demands a clearly delimited neural circuit that could be used to understand biological and molecular processes in the brain. He tells us that Aplysia has only 20,000 neurons, in comparison with the billions in human brains, and that "the simplest behaviors that can be modified by learning may involve less than 100 cells" (p.4). And that is exactly what he does: he studies those 100 cells and never asks himself what their place and connections are to the other 19,900 neurons. He speaks of behaviors of the snails, and is happy to tell us that one of his students succeeded in breeding them in the lab, providing them with enough "material".
That is, in my eyes, the biggest mistake of all: to forget that you are dealing with living organisms, and not with chemical systems. I will not dwell on the treatment of these animals, though the euphemism of "training" is worthy of modern armies, but I could not help my indignation at the ease and innocence with which living creatures were subjected to what can only be described as torture. Kandel even speaks of behaviors and emotions ("We focused initially on one type of learning, sensitization, a form of learned fear in which a person or an experimental animal learns to respond strongly to an otherwise neutral stimulus", p.7), while at the same time denying them every attribute of life.
There is not a single question about whether this "learned fear" may have played a role in the organism's behavior, and therefore influenced its neuronal responses. Kandel, and all his colleagues everywhere, find it perfectly normal to substitute the effects of these emotions with puffs of serotonin, and expect to understand how the brain works by studying these sanitized effects.
No wonder we know so little of the brain as yet.
That does not mean that the results Kandel and other researchers obtained are not important. They certainly are, and undoubtedly worth a prestigious prize. Still, even though Kandel seems deeply conscious of the limitations of the reductionist approach (he advocates, at the end of his lecture, the fusion of bottom-up, molecular-biology, and top-down, cognitive-science, methods), I have the strong impression, reinforced by his more philosophical writings, that it is nothing more than necessary procrastination, until molecular biology can fully emancipate itself from these disciplines. He does not realize that the only way molecular biology could obtain its results was because scientists, each and every time, fulfilled the indispensable role of the brain as a whole, including, in the first place, all its sensations and emotions. The magical "5 puffs of serotonin" that were necessary and sufficient to trigger long-term memory processes are not just a handy trick to get things done. They are the chemical means through which the brain reacts to internal and external events, but they are not the brain itself. And however far they go, scientists will always be able to find other magical puffs, and be reinforced in their conviction that they are closer than ever to a "biology of the mind". But will it be the final, ultimate puff? Or will these scientists one day understand that they are the brain they are so feverishly looking for?



2015-05-26
Retina: Miscellaneous
Hyperacuity 
(Westheimer, "Hyperacuity", in "Encyclopedia of Neuroscience", Squire (ed.), 2008).

is also a concept that only makes sense in the context of retinal images. Objectively measurable quantities like the diameter of photoreceptors, angles of vision, acuity tests, etc., turn the discrepancy between what those objective factors allow and the higher factual resolution of vision into a problem that needs subtle mathematical tools for its solution. It also encourages researchers toward a view of foveal vision that clashes somewhat with the generally agreed upon description. Whereas the textbook explanation of foveal acuity demands that to each receptor corresponds a single bipolar and ganglion cell, hyperacuity gives rise to the idea that each light point, or minimal stimulation, activates about a dozen foveal receptors. This lack of specificity makes the second stage of the argumentation possible, in which the localization of a stimulus (its identification supposedly being already taken care of) is undertaken by unknown neural processes.
The reader will pardon such a cryptic exposé of this famous concept, but I am afraid that I cannot produce better explanations than the author himself. The article mentioned is a typical scientific mantra that only makes sense for fanatic adepts:
- the problem is formulated in objective, scientifically acceptable, terms;
- mathematical concepts and calculations are presented that conform to the description as formulated;
- unsolved problems or difficulties created by the method are designated either as themes of future research, or, preferably, as the consequence of the complexity of neuronal computations that may or may not be resolved in the (near) future.

Concerning this last point some quotes:
"Subtle processing can then tease out location parameters enabling exceedingly fine discriminations."
"The neural mechanism by which this is achieved still needs detailed elucidation."
"For single small sources, it seems clear that there is an apparatus which arrives at the position of the centroid of the retinal light distribution as it emerges after passage through the individual receptors and their retinal connections, probably by way of a population vector. The equivalent concept in statistics is the calculation of the mean of a histogram."
"A sophisticated cortical apparatus is clearly at play."

I will not try to analyze the mathematical concepts and calculations used; I have no reason to doubt their validity when applied to a situation as described by the author. What is important is the relevance of this description: does hyperacuity refer to a real problem that needs to be solved by the brain, or is it a product of scientific imagination?

I have already expressed my doubts about the soundness of the concept of retinal image as a neurological concept. I can only reiterate my initial objections: retinal image is an object that only makes sense for an external observer. Calculations made on its basis can therefore hardly be deemed relevant for the functioning of the brain in general, and vision in particular.
The fact that there cannot be a one-to-one correspondence between receptors and external visual elements does not mean that there can be no one-to-one correspondence between receptors and sensations. It does mean that there is a clear distinction between the first correspondence and the second. More precisely, a binary relation has evolved into a threesome: visual elements, receptors and sensations. In such a platonic triangle no conflict emerges, and the relation can happily be described as harmonious. Hyperacuity becomes acuity simpliciter. Because it can be considered an empirical relationship, the need to assume complex neurological computations disappears.

2015-05-28
Retina: Miscellaneous
Retinotopy:
There seems to be an exception to the non-relevance of the retinal image for the way visual stimulation is processed. The spatial distribution of the photoreceptors on the retina has to be considered as significant. This is similar to the act of reaching for or grasping an external object: the positioning of the hand has to correspond somehow to the position of the object in space. The fact that the retinal image is upside down and left-right reversed has an objective reason: the way light rays pass through an optical lens.
What is arbitrary is the retinotopic image in the brain. We could, as it were, scramble the neurons in such a way that each one of them would be assigned a random position in the Lateral Geniculate Nucleus, or in the primary visual area, without endangering visual processes or changing the way the world is experienced. Assuming, that is, that the corresponding connections are kept intact. 


2015-06-09
Retina: Miscellaneous
Retinotopy (continued) 
If I am right, and only the spatial distribution of photoreceptors is significant, then the concept of Receptive Field will prove even more redundant, not to say misleading.
Right now this concept is centered on the spatial distribution of a group of neurons in relation to each other. If scrambling does not have any effect, then the sub-concepts of center and surround would lose any meaning they might still have, at least outside the retina.
Rerouting surgery like that performed by the Sur group gives me some hope that such an analysis could be one day tested empirically. (see my thread The Brain: some problematic concepts, the entry: "Where do sensations originate?" and further)

Also, a very simple explanation for the results obtained under the heading of receptive fields would be as follows:
Since there is no guarantee that we are dealing with a single photoreceptor (in fact, that is practically impossible), the On and Off responses could simply be the consequence of the light reaching the recorded cell or not. This does not explain all the results obtained, such as "the absence or near absence of a response to simultaneous illumination of both regions, for example, with diffuse light". On the other hand, such a lack of reaction is really remarkable. After all, it means that a whole group of neurons does not react to light. One wonders how we are able to look at a white patch without immediately seeing spots. Nothing in the concept of receptive field could explain such monochromatic images.


2015-06-09
Retina: Miscellaneous
addendum to the previous entry:
The quote is from Hubel & Wiesel ("Receptive Fields, Binocular Interaction and Functional Architecture in the Cat's Visual Cortex", 1962), one of the first articles to mention Direction-Selective (DS) cells.




2015-06-09
Retina: Miscellaneous
Seeing Darkness
Is being in a completely dark room the same as being (congenitally) blind? Or do we "see" darkness?
Barlow would maybe opt for the second solution. After all, he considers dark colors as excitatory stimuli to Off neurons and having reverse effects to light stimuli: "on-center are centripetal white, centrifugal black; off-center are centrifugal white, centripetal black." (Barlow et al "Retinal Ganglion Cells Responding Selectively to Direction", 1964). 
I must say that, in this particular case, I tend to agree with Barlow, at least as far as the excitatory effect of dark colors goes. Needless to say, I do not share his other conceptions regarding receptive fields, On-Off neurons, direction selectivity, or his overall conception of brain processes as reduction of redundancy. Which means, I'm afraid, that I have very little in common with this celebrated researcher.

Nonetheless, dark colors are not the same as darkness, are they?
What do we see when we look into the mouth of a deep cavern? And why are not all our dreams painted on a black background? Photoreceptors are hyperpolarized by light, and therefore activated by light and dark colors alike. That does not really help us answer the question whether we can "see" darkness. In the total absence of light there is no reason for the photoreceptors to hyperpolarize. And I suppose they do not either when we are having vivid dreams; there is still no light to activate them.
So if we can see light without our receptors being hyperpolarized, why not say that we can see darkness?
In my first thread (Retinal image and black spot), I tried to show that we do not see whatever falls on the blind spot: neither as a black hole, nor as a fill-in effect, as many would have us believe. But we do see the black nothing at the back of the cavern. How is that possible, unless our photoreceptors somehow produce these impressions? And everything we have learned about the brain teaches us of thresholds, the minimum level a (visual) stimulation has to reach before we can sense it, or at least before a neuron depolarizes or a photoreceptor hyperpolarizes. Surely there are no light rays emanating from the back of the cavern, at least none strong enough to elicit a neuronal reaction?
 
All those pseudo-metaphysical considerations lead me to only one conclusion: we must not equate the "light" we see with the physical phenomenon that bears the same name. Just as in the case of any sensation, explaining the laws under which the physical substrate has to function does not explain the sensation itself. That does not mean we can ignore these laws. We just have to be careful how we use them in our explanations. And I'm afraid the device comes without a manual.

Do we get too much visual information?
The idea that the brain is an information-processing system is certainly shared by Vaney et al ("Direction Selectivity of Ganglion Cells in the Retina", 2001). It leads them to conclude that "The optic nerve is effectively the information bottleneck in the visual system". I will certainly come back to this article when dealing with Direction Selectivity, so I will limit myself here to the question whether we can really say that the brain is confronted with more than it can handle.
The ratio of photoreceptors to ganglion cells (that is, optic fibers) would certainly seem to support the view that the brain had to devise special strategies to cope with the wealth of information provided by the former, given the bandwidth limitations of its optic nerve.
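The commonly cited textbook figures make that ratio concrete. These are round, back-of-the-envelope numbers of my own, not figures taken from Vaney et al.:

```python
# Round textbook estimates for the human retina; orders of magnitude only.
rods = 120_000_000
cones = 6_000_000
ganglion_cells = 1_000_000  # roughly one optic-nerve fiber per ganglion cell

receptors = rods + cones
convergence = receptors / ganglion_cells

print(f"{convergence:.0f} receptors per optic fiber")  # about 126 to 1
```

On these numbers, each optic fiber stands, on average, for more than a hundred receptors, which is the asymmetry the "bottleneck" talk rests on.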
When posed as such, the problem seems very simple: get a wider optic nerve, with more processing capability. Such a drastic solution would of course have far-reaching consequences at all levels of the brain and the body. But is it not strange that not one single organism has chosen this path? Those that did could of course already be extinct, but what about less extreme solutions? Surely 10 or even 5% more capacity would be possible without too much hassle? Ask Intel, and they would tell you that even such minimal progress can be economically very advantageous.
Again, maybe Evolution already did that, and we are the final result until now. 
Or maybe it is just not a problem at all.
Our visual system has two levels: foveal, with high acuity, and peripheral, with wide spatial coverage. We have reflexes that direct our attention to where it is needed, and there we make use of the foveal system for finer details. Why would we need more? Certain birds of prey seem to have a double fovea. I admit that I know very little about such visual systems and their pros and cons. What I do know is that if all of our visual field had the same richness of detail that our fovea presents to us, our brains would most certainly be overwhelmed by the amount of information they would have to process at the same time.
With this dual system the brain does not need to worry that it is getting too much information, nor that it is missing out on essential facts. Eye reflexes and movements provide us with a very valuable and reliable detection tool.
In other words, the information that our peripheral vision seems to keep hidden from us because of its poor acuity is in fact held at the ready for when we might need it.
This is a perfect example of the JIT (Just In Time) management that companies try so desperately to implement in their businesses.
It is also the reason why analyses based on the assumption of a brain overwhelmed by the amount of information it gets from the world very often miss the point. They are so busy devising complex strategies to cope with this "deficit" that they lose sight of the fact that neural processes are probably much simpler than they think.


2015-06-09
Retina: Miscellaneous
Direction Selectivity in the Retina
How does the retina know that something is happening in its visual field? The idea of Direction-Selective (DS) cells at the retinal level is quite a conundrum if one thinks about it for a minute.
Visual scenes are rarely absolutely static, and when they are, we make sure that they are not: by breathing, moving our eyes and head, if not whole parts of our body. I suppose only ninjas and reptiles can stand perfectly still.
We know that sudden changes (which motion always implies) will direct our eyes, head or even whole body towards the new stimulus, and that such a reflex originates somewhere in the pretectal area ("an ill-defined region, extending between the rostral margin of the superior colliculus and the thalamus. It contains the nucleus of the optic tract...", p.206, Nieuwenhuys et al, "The Human Central Nervous System", 2008). The role of other parts of the brain, even if not always clear, is also generally accepted. (See also the short book review by Duke-Elder, "Visual Reflexes and Cerebral Function", 1961.)
One thing is certain though, this reflex does not originate in the retina.
That means that the first image of a moving object will be brought to our attention by supra-retinal processes. And all the subsequent images as well, I suppose, the smooth pursuit reflex (following a moving object with the eyes) also being a supra-retinal reflex.

Direction Selectivity at the retinal level seems very implausible. Why then did renowned researchers believe that it was possible?

Saccades
This is what the current generation is learning about saccades: "A saccade is generated by computing the size and direction of the saccadic vector needed to null the retinal error between the present and intended eye position". This highly scientific-sounding claim was made in an M.I.T. course on vision by professor Schiller. The problem is that it is really nothing else but mumbo jumbo. Nowhere does the distinguished professor show how this computation is performed, or where. Here is the first explanation of those so-called computations:
"The discharge rate of neurons of the final common path is proportional to the angular deviation of the eye. Saccade size is a function of the duration of the high-frequency burst in these neurons".
The interpretation of such mechanistic effects as neural codes is quite daring. 
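Taken at face value, the quoted claim amounts to no more than a rate-times-duration relation. A minimal sketch, entirely my own toy rendering with an invented gain constant, shows how little "computation" the description actually contains:

```python
# Toy rendering of the quoted claim: the firing rate of the burst neurons
# relates to eye deviation, and burst duration to saccade size.  The gain
# constant converting spikes to degrees is invented purely for illustration.

def saccade_amplitude_deg(rate_hz, duration_s, gain=0.1):
    """Amplitude modeled as the integral of the burst: gain * rate * time."""
    return gain * rate_hz * duration_s

# In this toy model, a 400 Hz burst lasting 50 ms yields a 2-degree saccade.
print(saccade_amplitude_deg(400, 0.05))
```

Note that nothing in such a relation is a "code" in any interesting sense; it is a dose-response curve, which is precisely the point at issue.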
The next mention of codes and computation is just as baffling:
"The superior colliculus codes saccadic vectors whose amplitude and direction is laid out in an orderly fashion and is in register with the visual receptive field"
Which sounds very promising, until we realize that all we have been shown are phrenological connections between different parts of the brain, and which part is activated by which. Nowhere do we get an explanation of why, and more importantly of how, that is done, or of what those computations consist in neuronal/chemical terms.
What do "saccadic vectors" mean? Are they a scientific concept for internal use of scientists, or do they represent something that is actually computed by the brain? And how does "laid out in an orderly fashion and is in register with the visual receptive fields" make saccades more understandable? Would we know less of saccades if the SC was not so orderly organized? After all, all that tells us is the spatial correlation of a neural process with a  brain part. It still does not explain how they do their work. Spatial correlations are numerous in the brain, and we get the impression that we are in familiar territory because the landscape looks like something we have seen before. But this familiarity is an illusion that stands real understanding in the way.

[http://ocw.mit.edu/courses/brain-and-cognitive-sciences/9-04-sensory-systems-fall-2013/lecture-videos/lec-10-the-neural-control-of-visually-guided-eye-movements-1/]

Here is an alternative explanation.
Saccades, just like the dog's scratch reflex studied by Sherrington at the start of the 20th century, only need a stimulation in their receptive field [yes, I believe in the soundness of this concept in this case!] to get activated. The resulting movements will appear to us as random, or as the result of complex computations, unlike the scratching behavior of a dog, which we have no trouble attributing to a "simple" muscular reflex.
Unluckily, the term "receptive field" is shunned here, scientists having come up with a new term to conjure with, "motor field", avoiding any analogy with other kinds of reflexes. Though "receptive field" is still used where, in my view, it makes no sense at all, namely in the superior colliculus (SC).
This double use of the same idea shows how dubious the concept is when used indiscriminately. After all, the same stimulus is supposed to create a "receptive field" in the SC, and the "motor field" of a saccade. At the same time, the only way to define the receptive field in the SC is by mapping it on the retina, where it has to be considered as identical to the "motor field" of the saccade. If there are any differences between the two, I am not aware of them.

Retinal Locators
Another point of interest, related to the previous one, is the discrepancy between the vast majority of optic fibers that are connected to the cortex via the lateral geniculate nucleus (LGN), and the relatively small number that passes through the Superior Colliculus on its way to the brain stem, tectum and other exotic parts of the brain.
Apparently, the brain does not need, or at least does not have, a spatial locator for each individual visual receptor. That could certainly explain the existence of (certain, I have no idea which) retinal interneurons. Receptors have to share a common locator, and for that, cells are needed to connect neighboring receptors. Since we have horizontal and amacrine cells, not to mention bipolar cells, maybe they divide between them the tasks of reduction of light intensity (see Lateral Inhibition and Receptive Field) and localization of stimuli. An empirical matter if there ever was one.
Locator-sharing would also, at least partially, explain why we almost always need a large saccade followed by a smaller one to fixate properly on an object. There are just not enough locators to innervate the ocular muscles with the accuracy that would be needed to target each retinal point individually.
Furthermore, because the point where our focus falls is unpredictable (within the group of receptors that have indirectly innervated the ocular muscles), we need multiple eye movements to get an acceptable visual impression of an object, one that makes its identification possible. The eyes, for instance, are the favorite target of many predators (reptiles spraying poison) and one of the main visual attractors as far as humans (and animals) are concerned. Needless to say, there is no way the visual system could locate them each time on a first try. It must really take some searching to find them and recognize them.
Here we are confronted with a puzzling question: how do we know where to fixate on the eyes? We know there are two systems: one crude, based on peripheral vision, that brings us to the general area; and when we are there, the second system, foveal vision, makes it possible to zoom in on finer details. That is still not an explanation of how we can locate a specific visual detail in the fovea. We have only made the problem spatially more tractable.
Since we are able, within the central area, to distinguish and focus on distinctive features, and also to go back to them unerringly, the brain must somehow keep track of where those features are in our visual field. [Neural codes, anyone?]

There are two possibilities, not mutually exclusive.

1) We encounter the desired detail in a random fashion while our eyes roam the foveal field.
2) We already know where to look because we have encountered such visual stimuli (faces for instance), before.

In both cases, the brain needs spatial information to get where it has to be. Therefore, whatever the effects of encephalization, and whatever the influence of different cortical parts on the production of eye movements, we have to assume that this essential spatial information is present where it is most needed. And that would be, first of all, in the retina.

On and Off Ganglion Cells
I had expressed my doubts about the validity of a double connection of foveal cones with so-called On and Off bipolar and ganglion cells.
[see my thread Neurons, Action Potential and the Brain, the entries "On and Off Neurons" and "Cone Divergence".]

I seriously wonder now if I was not just plain wrong. I stand by my analysis that the distinction between On and Off optic fibers does not make much sense. Still, that does not mean that the anatomy was wrong.
The only way for the brain to find a specific feature in the visual field is to link it to a retinal locator. That could explain why all foveal cones have a double connection to the brain. One would be to relay the sensation, the other its location on the retina.
How this information is used by other parts of the brain is anyone's guess.



2015-06-11
Retina: Miscellanious
Gibson's "The Perception of the Visual World" (1950) is probably the book you would get if you asked a classic painter from the Renaissance to explain, in words (with occasional drawings), to his apprentice how to create visual impressions of distance, depth, shape, etc. [See Criminisi and Stork, 2004, "Did the great masters use optical projections while painting?", where they answer this question in the negative. Apparently the classic painters relied purely on their eyesight and powers of observation. In fact, the authors base their conclusion on mistakes in projection and perspective which would have been unthinkable had the artists used a projection device.]
Imagine that such a painter were also interested in vision in general, and not only in producing works of art; then the resemblance between the two books would be even greater. As a reader you would certainly not be surprised if the grand master considered his techniques as, somehow, explaining the inner workings of the eye. Even an abstract statement like "The illumination of a given section of surface, then, is a function of the orientation of the surface toward or away from the source of light." would not come as a big surprise. Especially if it were supplemented by more concrete observations: "illumination is not a function of the distance of the surface from the observer. The physical world gets visually denser as it recedes, but it does not get either darker or brighter" (p.94).
If, furthermore, this grand-master had been anachronistically influenced by the founders of Gestalt theory, then his affirmation that the retina is the locus not only of punctual stimulations, but also of stimuli of a more complex nature would certainly explain a lot: ""Immediate process" does not imply an innate intuition of distance; it only implies that the impression of distance may have a definable stimulus just as the so-called "sensations"", (p.69). [Here the reader may replace 'distance' by any arbitrary visual sensation, provided it is a complex one, like the sensation of depth, shape, texture, etc.] (my emphasis)
The retinal image is not a random distribution of points, but an ordered set. That demands another concept of stimulus, a so-called "pattern stimulus" or "ordinal stimulation" (p.63). The rest of the book is then given over to illustrations of different visual impressions that should be considered as such.
Such an approach marks a clear demarcation from a simplistic view of the retinal image with all its ambiguities. Such an image must first be considered as an element of projective or perspective geometry, not of traditional Euclidean geometry. Second, we must realize that there can be no point-to-point correspondence between the retinal image and the world, only correlations between spatial properties of objects and their corresponding retinal projections.
The basic idea is pretty clear even if never made explicit: visual sensations are explained with some kind of geometrical pattern. Because the empirical relationships between the geometrical patterns and the corresponding sensations are very strong, the reader is given the impression that the whole process has been elucidated in a very clear and illustrative manner.

Intermezzo
Gibson and Ramachandran ("The Tell-Tale Brain", ch.2 "Seeing and Knowing", 2011)
Both writers used the same example to illustrate the impression of relief through shading. Ramachandran was apparently not aware that his idea had already been presented and analyzed, in quite different terms, some six decades before. He speaks of the influence of different people but does not mention Gibson himself.
The illustration methods are accordingly disparate. While Ramachandran (p.51-55) uses computer drawings to make his point, Gibson had to be satisfied with a verbal explanation. He gives the example of a man to whom, depending on whether he is facing east, towards the sun, or away from it, some objects can appear either concave or convex (p.98 and further). 
What is interesting is the way the same phenomenon is interpreted in each case.
Gibson sees it of course as an argument for his gestaltist approach: "Is it conceivable that the way things face and the way the observer faces are reciprocally interrelated even in stimulation?", (p.99).
Ramachandran chooses a very different explanation: according to him, this impression of relief that seems to depend on the position of the sun is a product of evolution. We are accustomed to the fact that the sun shines from above us: "Why such a silly assumption? [sic]...Your visual system takes a shortcut; it makes the simplifying assumption that the sun is stuck to your head..."(p.54).
I confess that I have much more sympathy for Gibson's argument, even if I do not agree with it either. This kind of evolutionary argument is the reason why evolution theory has become the lazy intellectual's weapon. I suppose the author had a deadline to respect and needed to wrap it up fast.


2015-06-11
Retina: Miscellanious
The Archie Bunker effect in neuroscience
Jack Ziegler's cartoon "Cat thinks of a complex equation to get a ball off a table" (New Yorker, November 26, 2001) shows a cat thinking of complex geometric patterns and equations to get to the ball on the table. Such a cartoon, which to my mind can only be understood satirically, is taken literally by professor Schiller in the course mentioned above. It reminds me of the sitcom "All in the Family", where Archie Bunker is presented as the caricature of a redneck. The intention of the writers and filmmakers was entirely critical. But apparently, the public saw in Archie the reinforcement of their own prejudices.


2015-06-11
Retina: Miscellanious
Could a neuron not only represent a (visual) sensation, but also its location at the same time? That would be much more economical than having to use two different optic fibers. They would just have to split somewhere along the line into a spatial and a visual representative. Their targets would of course have to be known in advance: genetically determined, or ROM-neurons as it were. But then, the concept of retinotopy is certainly a familiar one to us by now.
The only objection that I can think of is the number of optic fibers that would have to make contact with the SC and the auxiliary optic system. I am not even sure if the number of foveal locators is not already too big for those parts. But that is something only empirical research can tell us about.
It does show that we are dealing purely with theoretical models that must stand the trials of reality.
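The economy argued for here can be caricatured in code: if fibers are fixed ("ROM") channels, then the identity of the active fiber is itself the location, and no separate locator signal is needed. A toy sketch under that assumption (the grid, names and numbers are all invented for illustration, not a claim about real anatomy):

```python
# Toy model: a retinotopic map as a fixed grid of fiber slots.
# The index of a fiber encodes "where"; its signal encodes "what".

def make_retina(width, height):
    """A 'retina' is just a fixed grid of fiber slots (ROM-like wiring)."""
    return [[0.0] * width for _ in range(height)]

def stimulate(retina, x, y, intensity):
    """A stimulus at (x, y) activates exactly one pre-wired fiber."""
    retina[y][x] = intensity

def read_out(retina):
    """Downstream areas recover both location and intensity from the
    same event: the active fiber's identity is its address."""
    return [(x, y, v)
            for y, row in enumerate(retina)
            for x, v in enumerate(row)
            if v > 0.0]

retina = make_retina(4, 4)
stimulate(retina, 2, 1, 0.8)
events = read_out(retina)   # [(2, 1, 0.8)]: what and where, in one signal
```

The point of the sketch is only that a fixed wiring scheme delivers location for free, which is what makes the single-fiber alternative economical.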

2015-06-22
Retina: Miscellanious
Sensations and memories of sensations: a metaphysical necessity
I realized that I was still trying to find purely physical (read: neurological or chemical) explanations for the memory phenomenon, even though all my (theoretical) results pointed to an inevitable conclusion: there is no color (code) in the brain.
Furthermore, the only chemicals that are known to react to color (the so-called pigments of the photoreceptors) disappear from sight as soon as they have done their job.
There are very few possibilities, all based on the idea that each neuron is what I called a ROM-neuron, genetically predetermined (at least as far as its point of origin is concerned, and maybe its putative, and until now highly hypothetical, endpoint).

1) Each optic fiber relays all possible colors.
2) Every fiber relays a specific color (or at least a limited number).

I still think that the first alternative is more tractable, and that its logic is easier for us to understand.

Then we have a possibility common to both alternatives:

3) Colors can be mixed. Our visual impressions depend not only on the individual sensitivity of photoreceptors, but also on their combination.

This hypothetical rule, which sounds very plausible when we look at how colors are mixed chemically and electronically (in televisions and cameras), is also a computational nightmare.
I challenge anyone to give a biologically plausible explanation of such a process without reverting to computational obscurantism.
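For comparison, the electronic version of rule (3) is trivial to state; the nightmare is doing anything like it biologically, per receptor, at scale. A minimal sketch of additive mixing as displays do it (an idealized linear RGB model, ignoring gamma and adaptation, purely for illustration):

```python
def mix_additive(colors):
    """Additive color mixing as in screens: sum each channel across
    the light sources, clipping at full intensity (1.0).
    Idealized linear model; real displays apply gamma correction."""
    r = min(1.0, sum(c[0] for c in colors))
    g = min(1.0, sum(c[1] for c in colors))
    b = min(1.0, sum(c[2] for c in colors))
    return (r, g, b)

# Red light plus green light appears yellow on a screen:
yellow = mix_additive([(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)])
```

Three additions and three comparisons per point are nothing for a GPU; the open question in the text is what, in a neuron, would play the role of the adder.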
Here is another form of obscurantism: if we all have to admit that physical stimuli produce sensations, why not accept the same from their physical records in the brain?
The only mysteries we have to keep, are the ones we cannot avoid: sensation and its locus.
If we assume that the same sensation, when recorded in the brain, still has the same, or rather a related, effect, we are no worse off than we were before.

Only the irrational need to ignore the fact of sensation can explain the equally irrational need to find a physical or chemical answer to the problem of memories of sensations, while we cannot even explain the sensations that caused them in the first place.

By accepting the irrefutable fact of sensation [Like Hagrid would say, "I should not have said that"; now every professional philosopher will feel obligated to explain why stimuli without sensations are metaphysically possible!] we remain consistent in accepting that memories also create their own form of sensations.
Otherwise, we will perpetuate the schizophrenic attitude of neuroscience that tries at every turn to pretend that the emperor's clothes are truly magnificent!
[A monkey brain has a maximum of approximately 15 billion neurons. Modern GPUs, anno 2015, already contain about half that number (in transistors) and counting. GPUs are based on the mixing principle. That shows the complexity that we can expect of a biological system based on the same principle. It is certainly not "metaphysically impossible", but I personally would not know where to begin explaining it.]

2015-06-22
Retina: Miscellanious
Do we need an auto-focus mechanism?
Modern cameras make it ever easier for the user to concentrate on the perfect shot. I always had problems with getting crystal clear pictures because of some defect in my vision that luckily was not noticeable anywhere else. The advent of auto-focus came too late for me, I had already given up photography as a hobby years before! Auto-focus was certainly a technical wonder, and it does remind us of the same ability of our eyes to focus immediately on different objects and distances. But how much alike are those two mechanisms?
Imagine all the computations the brain would need to do if it were a modern camera. Would it not be simpler if our eyes used a low-tech process like the old point-and-shoot devices? One setting for outdoors, one for indoors or cloudy days, and you are good to go!
Still, the quality of our vision resembles that of the latest generations of cameras more than that of the old silver boxes.
Nonetheless, I am convinced that the focus mechanism of biological vision would be considered very low-tech.
Imagine that each point in the retina had its own setting, understood as the amount of contraction or relaxation of different eye muscles.
We would still have a problem with different distances: our foveal image can be filled with very close or, the opposite, very far objects.
How does the brain know when to switch from one setting to the other? We all know the principles of focal points, distances, and the necessity of moving lenses nearer to or farther from the light-sensitive surface (the film or retina). To paraphrase a famous book, I would say "The Brain Does Not Work That Way".
See, I cannot believe that a mechanism exists for the brain to decide when an image is in focus. That would be assuming that the brain can somehow tell the difference between what an image should look like and what it does look like. If that were the case, people with poor vision would permanently be trying to get a better focus, and their eye muscles would know no rest. As an almost senior citizen [yes, I look much younger, thank you very much!] I very often wish for a longer arm to be able to read some labels in the supermarket clearly. But I move my arms, or maybe my head, to get a clearer picture, while my eye muscles just sit there and laugh at me!
Sure, I already hear you say, that is an age thing, or a birth defect that makes your eye muscles not react properly. That is of course undeniable. But how does my brain know that? How can it make the distinction between a natural state (the image is simply out of focus) and a consequence of some deficiency? Assuming that it just does is also just not good enough.
Still, what if it were indeed simply a matter of eye muscles not working properly? That would mean that when they do, images are always in focus.
We are obviously back to where we were before. What about different focal distances?
What makes an eye auto-focus on a near object as well as on a distant one?
Maybe we need to go back to basics. How do we go from a near object to a distant one? The most obvious way is that the distant object somehow attracted our attention and we directed our fovea at it. It impinged first on the peripheral retina.
Or the other way around. We were looking in the distance when something near us attracted our attention.
We had already established, or at least mentioned, the fact that we can direct our attention to objects because we remember where they are relative to our current position. Which means that memory in those cases plays the role of a visual stimulus.
If we accept the idea that a saccade is in fact a muscular reflex, where a certain setting of our eye muscles corresponds to each spatial position in our current field of vision, then we will have spared our poor brain a very big headache!
We would have solved the distance puzzle: to each current field of vision would correspond certain muscle positions, and from there we could go to any other field of vision.
It would be like grabbing the phone from the table and bringing it to your ear, or interrupting the gesture that was meant to put it back on the table because the conversation had ended, and bringing it back to your ear to answer the next call. Or standing up from a sitting position on the ground, as opposed to a lying position. You do not need to think about the position you are now in before you can move to the next one. If there was ever a need for "programs in the brain", those would be perfect examples!
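The reflex idea above amounts to a pre-wired lookup rather than a computation: each position in the current field of vision indexes a stored muscle setting directly, with no geometry worked out at run time. A hedged sketch (the table, the muscle names and the numbers are all invented for illustration):

```python
# Hypothetical pre-wired reflex: retinal position -> stored muscle setting.
# Nothing is computed when a saccade is made; the setting is simply looked up.
SACCADE_TABLE = {
    (0, 0):   {"medial": 0.5, "lateral": 0.5},  # straight ahead
    (10, 0):  {"medial": 0.2, "lateral": 0.8},  # 10 deg to the right
    (-10, 5): {"medial": 0.8, "lateral": 0.3},  # up and to the left
}

def saccade_to(target):
    """The 'reflex': retrieve the muscle setting for a target position.
    No focal distances, no lens equations, no comparison of images."""
    return SACCADE_TABLE[target]

setting = saccade_to((10, 0))   # the eye muscles get their marching orders
```

The design choice the sketch makes explicit is the trade-off the text argues for: all the intelligence sits in the (slowly built, pre-wired) table, none in the moment of the saccade itself.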
In other words, our eyes do not need an auto-focus mechanism, for the simple reason that they are either always already in focus, or they are not, for whatever reason.

2015-06-22
Retina: Miscellanious
How are saccades possible? 
We all (think we) know that a change in the visual field evokes a saccade to the locus of change. But how is this change perceived at all? What makes the reflex kick into action?
It would be too simple to presume that a change, whatever its nature, is immediately noticed. In fact, experiments on Change Blindness (started in 1976 with McConkie and Rayner, "Identifying the span of the effective stimulus in reading: Literature review and theories of reading", in H. Singer & R. B. Ruddell (Eds.), Theoretical Models and Processes of Reading) have shown that when changes appear during a saccade or a flicker, the subjects do not notice them at all. [A typical such experiment involved changing the text on a computer screen, text on which the subject was working. It was mentioned by Dennett in his famous "Consciousness Explained".]
What Change Blindness perhaps teaches us is that we are not constantly updating the visual scene each time we look. In fact, it seems we are indeed looking at the memory of a visual scene, rather than at the scene itself! (See above.) As long as no change has been detected, the retinal information is apparently discarded.
This is somehow quite understandable. The amount of stimuli that reaches the retina is infinite, and there is no way for the brain to compare each time what it has in its memory with what is out there. [Try creating an algorithm that would decide on the identity or non-identity of two phenomena without a homunculus looking over your shoulder!] So our brain does not even try. It just records any changes it does detect.
Where and when is the "decision" to discard retinal stimulation taken? And how?
Let us go back for now to the question of how we detect change in general, that is, how saccades are originated.
Wherever that happens, in the auxiliary optic system, the cerebellum or elsewhere in the brain, a change must be detected to trigger a saccade.
The problem is obviously not a logical one. [Any luck yet with the algorithm?]
Let us then look at it from a non-physical and non-chemical perspective. Let us assume that change, just like motion, is in fact a sensation. It would still need a neural substrate of some sort, wouldn't it?
This substrate, just like that of motion, would be a (mildly) complex pattern of two different [see what I mean?] states, A and B. While a visual neuron is in state A, nothing happens (nothing relevant to our problem, we hope). As soon as it gets into state B, all hell breaks loose.
Such a beautifully logical pattern would be highly plausible if visual neurons were binary transistors, but certainly not if each neuron can convey any possible visual sensation.
But why could we not consider it as a binary pattern? We would have A and NOT-A. We do not need to worry about the specific nature of the new stimulus, only acknowledge the fact that it is different from A.
Such a binary detector would certainly make sense, if we can find a place for it in the brain where it can fulfill its function.
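The A / NOT-A detector just described needs no knowledge of the new stimulus, only its inequality with the stored one. A minimal sketch of such a binary change detector (the class and its interface are hypothetical, for illustration only):

```python
class ChangeDetector:
    """Fires when the current input differs from the stored one.
    It never asks *what* the new stimulus is, only that it is NOT-A."""

    def __init__(self, initial):
        self.state = initial        # the current 'A'

    def feed(self, stimulus):
        changed = (stimulus != self.state)
        self.state = stimulus       # the new stimulus becomes the new 'A'
        return changed              # True would trigger the saccade

d = ChangeDetector("A")
d.feed("A")   # same stimulus: nothing happens
d.feed("B")   # NOT-A: hell breaks loose
d.feed("B")   # B is the new A: quiet again
```

Note that the detector is binary only in its output; its input can be any stimulus whatsoever, which is exactly the point made above about A and NOT-A.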

Could we use retinal locators instead? 
1) A muscle only needs to react when it is "pinched". After its reaction it relaxes again, waiting for the next stimulus.
2) Saccades can be automatic, or they can be triggered by visual events. The same muscles may be used for one or the other, but the "instructions" to move can come from a different "place".
3) Retinal receptors are continuously being activated collectively. How come our eyes are not continuously moving everywhere? Wait! They are, aren't they?
4) There is also the question of the same stimulus impinging on the same location on the retina and its disappearance from experience. Eye movements allow the receptors' pigments to be refreshed and their sensitivity reinstated.
5) What if this did not happen for retinal locators? The same stimulus would no longer be felt, because eye movements do not reset the photoreceptor, and it would take a new stimulus to "pinch" the muscle. We would of course still need an empirical explanation of what makes such a reaction possible.
[Such a neuron could be called a Change-neuron! It would be comparable to Direction Selective cells (DS). There does not seem to be an end to possible types of cells or neurons in the brain! In view of the Archie Bunker effect, let me confirm the sarcastic character of such a comment. See above]
6) Also, we still need an explanation for the putative fact that retinal stimulations are blocked unless there has been a noticeable change.

The last two points seem evidently related.
The advantage of (5) is that we do not need an impossible computation to acknowledge change, though it still remains a mystery. What we need is a chemical reaction to change that could act as a trigger.
Take one of those fairground attractions where machos are allowed to show off their strength by hitting a plate with their hands or with a hammer. The harder they hit, the higher a ball rises up the shaft. Imagine an electrical trigger that kept giving the same impulses at regular and fast intervals. It is conceivable that the ball would appear to be immobilized in midair while it is in fact constantly fighting gravity.
We would certainly not accuse the ball or the electrical trigger of using computations instead of natural forces! Likewise, there is no reason to invoke the existence of any computational process in the brain. 
The same stimulus keeps the instruction to the muscle locked in the "on" (or "off") position. The muscle contracts and relaxes before reacting to other instructions. It will only be able to react to the old instruction once the latter has been reset. It will still be able to go back and forth to and from the same location because of other instructions.
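This locked-instruction idea can be stated as a simple latch: a stimulus onset triggers once and then holds the line; only a reset (the stimulus being removed) followed by a new onset fires again. A sketch of that logic alone (purely mechanical, no chemistry claimed, all names hypothetical):

```python
class LatchedTrigger:
    """Fires on the onset of a stimulus, then stays locked: the same
    sustained stimulus cannot fire it again until the line has been
    reset by the stimulus disappearing."""

    def __init__(self):
        self.locked = False

    def feed(self, stimulus_present):
        if not stimulus_present:
            self.locked = False     # reset: ready for the next onset
            return False
        if self.locked:
            return False            # sustained stimulus: ignored
        self.locked = True
        return True                 # onset: trigger the muscle once

t = LatchedTrigger()
fires = [t.feed(s) for s in [True, True, True, False, True]]
# the onset fires, the sustained stimulus does not, the reset re-arms it
```

The muscle can still be sent to the same location again, but only by a fresh instruction, which is all the fairground analogy requires.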
I have no idea which chemical and neural processes could account for such a phenomenon, and will therefore leave it to more knowledgeable researchers to work it out. It is at least a possibility.

What remains is the fact that the same process can explain both
a) how unchanged visual sensations are blocked from memory,
b) how a change in visual sensations can function not only as an attractor, but also as a muscular trigger.

The question whether such a phenomenon is limited to retinal locators or is valid for all photoreceptors remains open.


2015-07-27
Retina: Miscellanious
Saccadic Suppression revisited

Change blindness occurs not only during eye movements, but also during fixation. We would be looking straight at the change and not see it, because it happened when we were not looking, and we are still relying on our memory.
If we always "see" the content of our memory and not the world itself, our vision does not need to be suppressed at any moment. A change attracts our attention, we fixate on it and record it in memory. And that is the moment we "see it".

[Whether one happens before the other or simultaneously does not seem to be really important in this context. Anyway, that is a question I would not know how to answer: do we record it first and then see it, or vice versa? The first alternative would be more in line with the idea that we see what we remember, but maybe it does not apply to the first time a change is perceived.]

What we therefore need to explain is not a probably inexistent phenomenon (vision suppression during eye movement), but a real one: change blindness, or rather its positive correlate, change perception.

Mach has taught us that we can only be conscious of acceleration and not of motion itself. Maybe the same rule applies to vision: we can only see change.
That would also mean that we only record at moments of fixation. 
That is why we can see blur in movies, and not in real life. The first is an objective phenomenon, the result of chemical (emulsion sensitivity) or mechanical (shutter speed) processes. Whatever the reasons, it really exists independently of our perception processes.
A blur in real life would mean that objects move faster than light, which physicists consider impossible, certainly for everyday processes. Such a blur can therefore only be produced by our own perception, when it is somehow malfunctioning.




2015-09-14
Retina: Miscellanious
Retinal Locators?
They do sound like all the mathematical devices I have been criticizing others for, even though they are supposed to be muscular links instead of mathematical constructs. After having studied hearing, I have doubts again about assigning specific neurons to the localization function. Let me just reconfirm the decision to keep this issue open. It apparently needs more reflection, and even better, more physiological and neurological research.



2016-04-26
Retina: Miscellanious
Seeing Darkness (2)
http://philpapers.org/post/10053

Imagine a thin luminous circle on a pitch-black background. Let us assume for argument's sake that the background does not reflect any light, or at least not enough to excite the receptors.
We will obviously see the circle, and therefore also the black parts of it. In other words, the circle will not turn into a straight or even sinuous line, and we will be able to see the diameter of the circle.
We also know that receptors are always active, even in pitch dark. It would therefore seem that the dark parts of the circle we are seeing form a real perception: we are seeing the darkness.
There is a very easy way to test such a theory. Patients with glaucoma lack active photoreceptors in certain areas of their retina. Think of tunnel vision. What we need is a clear phenomenological study of the visual experiences of these patients, instead of intellectual reconstructions. Do the patients really experience light/images at the end of a tunnel, like someone with normal vision would? Or do they see these images just like we see a whole scene?
Another example would be two weak lights separate from each other, say at different corners of a pitch-black room. We see the (black) distance between those lights. If my analysis is correct, someone lacking photoreceptors between those lights should see them as one.
I wish people with such handicaps would react to these remarks. It would help cut down on the speculation on my part.


2016-12-27
Retina: Miscellanious
Seeing Darkness (3)
[see
mirror mirror on the wall
mirror mirror on the wall (2)
mirror mirror on the wall (3)

]