From PhilPapers forum Philosophy of Cognitive Science:

2016-07-20
Eye movements
This thread was wrongly deleted. The PhilPapers Team offered me the opportunity to restore it.

"How many threads do you need to restore? Combining multiple posts into one would be a way to get around the limitation on 2 posts, and would also be less work for you. Since they were previously accepted, we'll make sure to accept them if you notify us ahead of time with the subject heading." The PhilPapers Team



8 Efferent nerves to the inner ear: a first approach

Some references:

- Rasmussen, "The olivary peduncle and other fiber projections of the superior olivary complex", 1946;
- Rasmussen et al., "Neural Mechanisms of the Auditory and Vestibular Systems", 1959/2011;
- Warr, "Olivocochlear and vestibulocochlear efferent neurons of the feline brain stem: their location, morphology and number determined by retrograde axonal transport and acetylcholinesterase histochemistry", 1975;
- Brown, "Morphology of labeled efferent fibers in the guinea pig cochlea", 1987;
- Wolff, "Efferente Aktivität in den Statonerven einiger Landpulmonaten (Gastropoda)", 1970;
- Ryugo et al., "Auditory and Vestibular Efferents", 2011;
- Goldberg et al., "The Vestibular System: A Sixth Sense", 2012.

The idea that the brain could influence how its own sensory receptors work is very difficult to accept. We can understand that organisms can choose what to attend to and what to ignore, but just as we cannot not hear, we cannot not see or not feel something touching our skin.
Nonetheless, it is this principle that has been called into question for many decades now, even if doing so seems to have had no practical consequences at all. At least, I know of no article that goes beyond establishing this "fact", accompanied by some platitudes about why Evolution saw fit to develop such a mechanism. How such efferent neurons could function is anybody's guess.

This seems like a perfect place for George to build a nest [okay, home, have it your way]. Stimulation of the hair cells in the semicircular canals and otolithic organs is supposed to be mechanical: head movements result in fluid movement that finally hyper- or de-polarizes the receptors.
I could understand the logic behind efferent neurons stimulating the afferent cells of the receptors. Modulation of the effects of the mechanical stimulation could, with great difficulty I might add, be seen as an evolutionary precaution whose usefulness still has to be established.
But what effect could efferent neurons possibly have on the receptor cells themselves? The hair cells represent translational and gravitational effects on one side, rotational effects on the other. What would the efferent neurons represent? What would the modulation of these effects look like? Less rotation or less translation? Less gravity?!

The idea of sensory modulation is not as strange as it might appear at first sight. We can partially close our eyelids, or squint, to regulate the amount of light impinging on our retina. We sweat or shiver to counter the effects of temperature. The first kind is purely voluntary, while the second would be considered more of a biological reflex over which we have no direct influence.
Why not a modulation of cochlear and vestibular stimulations? In the case of auditory stimulation we can easily find all kinds of logical arguments in favor of such a mechanism. A better use of attentional processes would be high on the list.
I honestly can think of no advantage to the modulation of vestibular stimulations, or at least, those that I can think of seem to be the wrong ones. It would be very advantageous for an organism to counter the effects of involuntary rotation. Falling into a swollen river and being tossed in all directions by the water is a situation where a "cool" head could mean the difference between life and death. And what about being shaken violently by a predator and slipping from its jaws? Would it not be a blessing if the prey could run away as soon as it fell to the ground, instead of feeling dizzy and disoriented?
How about going to the fair with your grandchildren and getting on one of those horrible rotating devices which they love so much? I would immediately sign up for such a bio-enhancement!
But alas! It is not to be. Whatever these efferent neurons are doing to hair-cells, it is nothing so obviously useful.

Vestibular modulation would only make sense in situations where the brain can anticipate vestibular stimulation. Not only that, the brain would need to know in advance what kind of stimulation it will be getting: is it a simple head movement, or a stuntman salto?
Otherwise, the brain might be modulating stimulations that would be best kept unadulterated for the right reactions to follow.
But such a distinction seems very unlikely. If it were possible, nobody would ever get sea sick!
The fact that we can eventually learn to cope with a boat's movements could mean that the modulation process is itself a learning process that takes time to settle in and bear fruit.
But modulating the way we experience gravity and movements would also have consequences on the way we run, walk or stand. The body needs those sensations to be authentic at all times. It would be like the brain deciding to modulate vision and reduce our sensitivity to blue in the summer because of the cloudless sky. Maybe it does just that, but certainly not by changing how visual cells react to different colors at the retina level!

9 Saccadic suppression revisited

Change blindness occurs not only during eye movements but also during fixation. We may be looking straight at the change and not see it, because it happened while we were not looking and we are still relying on our memory.
If we always "see" the content of our memory and not the world itself, our vision does not need to be suppressed at any moment. A change attracts our attention, we fixate on it and record it in memory. And that is the moment we "see" it.

[whether one happens before the other or simultaneously does not seem to be really important in this context. Anyway, that is a question I would not know how to answer: do we record it first and then see it, or vice versa? The first alternative would be more in line with the idea that we see what we remember, but maybe it does not apply to the first time a change is perceived.]

What we therefore need to explain is not a probably nonexistent phenomenon (vision suppression during eye movement), but a real one: change blindness, or rather its positive correlate, change perception.

Mach has taught us that we can only be conscious of acceleration and not of motion itself. Maybe the same rule applies to vision: we can only see change.

That is why we can see the blur in movies, and not in real life. The first is an objective phenomenon, the result of chemical (emulsion sensitivity) or mechanical (shutter speed) processes. Whatever the reasons, it really exists independently of our perception processes. A blur in real life would mean that objects move faster than light, which physicists consider impossible, certainly for everyday processes. Such a blur can therefore only be produced by our own perception.

10 What is VOR? And does it really exist?
The relationship between eye and head movements is always considered in a so-called oculo-vestibular context.

head --> stimulation of inner ear --> vision constraints --> eye.

As you can see, the relationship is a very complicated one. We are very far from the doll's-eyes paradigm, which assumes a simple linear relationship between the two organs.
This is by the way what an online textbook from Dartmouth Medical School has to say on this subject:
"If the patient is awake [the brain] keeps the eyes from deviating from midposition and actually may drive the eyes beyond the midposition toward the direction of turning. If the patient is in a coma due to bilateral hemispheric suppression [...] the eyes deviate away from the direction of head rotation in an unchecked manner (the reflex response is not inhibited by cerebral cortical input)." (Reeves and Swenson, "DISORDERS OF THE NERVOUS SYSTEM", https://www.dartmouth.edu/~dons/index.html, ch. 6)
So, if there ever was a doubt as to the link between head and eye movements, the reaction of coma patients should take it away. What such examples do show is that, just as in the case of Mach's experiments, visual constraints play no role whatsoever in this model.

We have, though, instead of
 
head movement --> stimulation of inner ear --> NO eye movement. (Mach)

[A case of so-called VOR cancellation. Remark that such a concept turns VOR into something like the Freudian concept of "resistance": it can never be falsified. Whatever the patient says that the therapist does not like can be interpreted this way, and with impunity. In the same way, whenever a phenomenon does not seem to support the existence of VOR, just put it under the heading of VOR cancellation, and you are good to go.]

head movement --> stimulation of inner ear --> eye movement. (VOR)

It seems impossible to make the line any shorter, since any head movement has stimulation of the inner ear as a consequence.
A very hard piece of evidence, for those who, like me, doubt this trinity, is the so-called Barany caloric test, for which Barany received the Nobel Prize in 1914.
In fact, this test is considered so reliable that it is one of the tests used to establish brain death!
Its simplicity is equaled only by its beauty [or the other way around]: pour some water, warmer or cooler than body temperature, into one of the ears, and a nystagmus will be produced, its direction depending on whether the water was cold or warm.
(Following the COWS mnemonic: Cold Opposite Warm Same)
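The COWS rule is simple enough to state as a lookup. A minimal sketch in code (the function name and string encoding are mine, purely illustrative):

```python
def caloric_nystagmus(ear: str, water: str) -> str:
    """Direction of the fast phase of nystagmus in the caloric test,
    following the COWS mnemonic (Cold Opposite, Warm Same).

    ear:   'left' or 'right'  (the irrigated ear)
    water: 'cold' or 'warm'   (relative to body temperature)
    Returns the side toward which the fast phase beats.
    """
    opposite = {"left": "right", "right": "left"}
    if water == "cold":
        return opposite[ear]   # Cold -> Opposite side
    if water == "warm":
        return ear             # Warm -> Same side
    raise ValueError("water must be 'cold' or 'warm'")

# e.g. cold water in the right ear: fast phase beats to the left
assert caloric_nystagmus("right", "cold") == "left"
assert caloric_nystagmus("left", "warm") == "left"
```

Nothing more than the mnemonic itself, of course; the point is only that the test's outcome is a fixed function of ear and temperature, whatever mechanism one believes lies behind it.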

Remark that we did shorten the line which has now become:

stimulation of inner ear --> eye movement

And that seems like a definite proof that vestibular sensations and eye movements are causally related.

But here is "le hic", the snag, as the French would say. People undergoing the caloric test are subject to vertigo. And the problem with vertigo, or dizziness, is that it is usually not a very nice sensation. It is somewhat comparable to pain.

Metaphysical Digression

The withdrawal reflex is said to come into action before we even have time to feel pain. It is therefore purely mechanical. How about jumping up and down and bringing your hurt finger to your mouth? What is the neuronal cause of this behavior? You can look for pain in the brain as long as you want; you will not find it.
Somebody armed with science-fiction technology that would allow him to follow the excitation of each individual nerve in situ, that is with full consideration of the context in which it is happening, would probably see two series of events follow each other, but with no apparent link between them.

action one - gap - action two

This paradigm would be a very clear representation of causal efficacy of a non-physical event.
It is indicative of the Zeitgeist that a concept like emergence has been accepted so easily, because it could hitch a ride with modern concepts like "computer", "neuron" and "complexity" that hide its metaphysical roots, while the concepts of spontaneous generation or mental causality are ridiculed.

action one - emergence of a link - action two
action one - spontaneous generation of a link - action two
action one - sensation/emotion - action two

I must say that of all three realizations of the same paradigm, the last one has at least the plausibility of psychological phenomena.

We are worried that accepting causal efficacy for mental events would throw the floodgates wide open to all kinds of mystical pretensions, which would of course certainly happen. But do not think for a second that the mystics would then get a free lunch.
The physical realm cannot solve the halting problem; for that it needs the mental realm. But does that mean that the mental realm is "computationally" or "logically" complete? [Whatever those terms may mean in that realm.]

Or will the mental realm also need the physical realm to be effective?

The previous paradigm would then become:

Sensation/Emotion - gap - Sensation/Emotion

And what could fill in the gap better than the physical realm? We would then have:

Sensation/Emotion - physical action - Sensation/Emotion

In other words, you cannot go indefinitely from sensation to sensation; you need the in-between stops in the physical realm to access other sensations. [We can go from pain to relief only if the effects of pain are taken away. And that is a physical process. Without it, we would probably witness something akin to inertia: the pain would never stop, just as an object put in motion never stops by itself!]

We would end up with what we already have, a physical and a mental realm. No harm done.

What makes the gap nigh invisible is its familiar nature. We see no gap between the behavior of someone burning his finger, withdrawing his hand, and jumping up and down. Why would we see it in the neural version of this series of events?
It would take a very thorough knowledge and mapping of the brain to discover the gap. Without the scientific certainty, which may never be attained, that two successive events are not causally related, we will always fill in the gap ourselves.
Of course, such speculative considerations may turn out to be superfluous.

A simpler explanation is maybe given by Baloh and Honrubia's "Clinical Neurophysiology of the Vestibular System", 2010:
"Presumably, the spontaneous afferent nerve activity increases and decreases because of heating and cooling of the afferent nerve, respectively." (p. 179)
Which means, in fact, that the caloric test is nothing but a surrogate vestibular stimulation.
We are therefore back to the previous situation:

head movement --> stimulation of inner ear --> eye movement
or rather:
caloric stimulation --> stimulation of inner ear --> eye movement

The metaphysical digression was perhaps not so superfluous after all?
Let us see if we can find more down-to-earth arguments.

11 Are eye movements and vestibular stimulation causally related?
Not according to Mach's experiments, and certainly not according to our very own everyday experiences. We do not get a doll's-eyes reflex when going up or down in an elevator, and air pilots seem to be very well capable of repressing any oculo-vestibular reflexes they may have. This was known even before World War II as vestibular habituation (Griffith, "The organic effects of repeated bodily rotation", 1920; Dodge, "Habituation to rotation", 1923).
Furthermore, there is no unequivocal proof of a direct link between the inner-ear afferents and the extra-ocular motoneurons, the topological concept of vestibular nuclei notwithstanding. All the evidence is circumstantial and could very easily be interpreted differently. Which I certainly will.
All in all, there is no reason to blindly accept the existence of such a link.
In fact, doing away with VOR might turn out to be very beneficial for further research. After all, more than a century of experiments after Mach have still not given any final results that could satisfy everybody. Researchers are still playing with Barany chairs and arguing about meaningless numbers and other quantitative models. And it does not look like they will stop any time soon!

In his Nobel Lecture (1914) Barany spoke of different tests that showed the links of the inner ear neurons not only with the extra-ocular muscles, but also, via the cerebellum, with practically all body muscles.
The effects of vestibular malfunctions on balance and posture were already known, Flourens' predilection for methodical ablations of different layers of neurons in the cerebellum and elsewhere having given convincing results which had been confirmed by others.
[Flourens, "Recherches Expérimentales sur les Propriétés et les Fonctions du Système Nerveux dans les Animaux Vertébrés", 1842.]

What I find particularly illuminating in the examples given is the complexity of the connections between vestibular neurons and other parts of the nervous system.
Let us take the simple example of telling the patient to hold his right arm straight and then "syringing" his right ear with cold or warm water. The arm inexorably starts deviating from its position.
The deviation is even more noticeable when the patient is asked to touch with his own fingers those of the doctor. His fingers will deviate to the right or the left, opposite to the nystagmus that had been caused by the caloric stimulation.

Here is "anozer 'ic":
The same stimulation can apparently affect many muscles, whatever their initial position. Barany speaks of the different joints at the wrist, elbow, hip [no George, not 'ip, hip. We must not abuse a good thing!], etc.
That is a lot of branching, which must also involve a lot of logic to make sure that when the same vestibular neurons are activated, the right muscles, according to situation and context, are innervated. [Shall it be the eye AND the cerebellum, or the eye alone? Would you like it gift-wrapped?]

The conception of a direct connection between stimulation of the inner ear and so many different muscles is untenable. Other, more complex, neural mechanisms must be involved. And if that is the case, how can we still speak of "reflexes"?

12 Retinal Image and its Movements

I must admit that I have great difficulty at times following the argumentation of different authors, starting with the pioneers of the 19th century. This is my problem:
A retinal image can be compared to a drawing on the inside of a globe. However the globe moves, the image will move with it; whatever rotation the globe performs, the image will rotate accordingly... in space... if we make abstraction of the globe.
What about torsional movements? Can the retinal image turn around its own axis if the globe does the same thing? Again, if we make abstraction of the globe, we can only conclude that the image will have rotated about its own axis: it will have performed a torsional movement.
Here is the problem as I see it: the globe, that is, the eye, can be said to rotate and even translate in space... if we make abstraction of the head! Otherwise, we can only say that the eye can go left-right and up-down. Even the so-called rolling of one's eyes is in fact a combination of the previous movements.
Listing's plane is the mathematization of this intuition. We cannot turn our eyes about arbitrary axes in 3D space; their rotation axes are confined to a 2D plane.
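To make this intuition concrete: Listing's law says that any eye orientation can be reached from the primary position by a single rotation about an axis lying in a fixed plane, so the axis has no component along the line of sight, hence no torsion. A minimal sketch, under my own choice of coordinates (gaze along x, Listing's plane as the y-z plane), which are assumptions for illustration, not the standard clinical frame:

```python
import math

# Gaze along the x-axis; Listing's plane is taken to be the y-z plane.
# An orientation obeys Listing's law when its rotation axis (from the
# primary position) has zero x-component, i.e. no torsional twist.

def rot_y(a):  # rotation about the y-axis ("vertical" eye movement)
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(a):  # rotation about the z-axis ("horizontal" eye movement)
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def rotation_axis(R):
    # Axis of a rotation matrix, read off from its skew-symmetric part.
    return (R[2][1] - R[1][2], R[0][2] - R[2][0], R[1][0] - R[0][1])

# A purely vertical and a purely horizontal rotation each have their
# axis in Listing's plane: zero x-component, zero torsion.
up, left = rot_y(math.pi / 2), rot_z(math.pi / 2)
assert abs(rotation_axis(up)[0]) < 1e-12
assert abs(rotation_axis(left)[0]) < 1e-12

# But their composition does NOT: the net axis acquires an x-component,
# a torsional twist about the line of sight.
combined = matmul(up, left)
assert abs(rotation_axis(combined)[0]) > 1e-6
```

The last assertion is the interesting part: Listing-compatible rotations are not closed under composition, which is why the law is a genuine constraint on how the eye moves and not a triviality of 3D rotation.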
Let us use the ancient Greek conception of the eyes being the origin of the light we direct on external objects to see them: the electric torch paradigm.
Do we have to imagine the torch fastened to a system of horizontal and vertical rails on a wall (Listing's plane), or a torch fastened to a mobile wall that can be moved up and down, but also be tilted forward or backward?
What about tilted to the left or right side? Can the eye do that?

I suppose that all tilted positions can be attributed to the eye in a stationary head, while all translations can be seen as the result of the head and/or the body moving. [Helmholtz, about the way the eye is fixed in its orbit: "any displacement of the eyeball as a whole, that is, any displacement in which every point of the eyeball is moved in the same direction, is rendered impossible." Treatise, vol. 3, par. 27]
The problem is that the eye cannot be tilted from its place. It can only be rotated. So, instead of a wall, maybe a hand holding a torch would bring us closer to how the eye behaves.
Do we need different concepts for eye movements in this second case? Can we really say that the eyes move from left to right or up and down, just because the head itself is moving?

In the after-image paradigm, what is being calculated: the retinal image, or the visual sensation as after-image?
The retinal image can be said to perform any movement, and not only torsion, only in an abstract, geometrical way. As part of the eye globe it just passively follows the movements of the latter, while in fact remaining stationary.
What is calculated is not the position in space of an external, observable object, but that of an internal, private visual sensation.
We are somehow back to the Weberian paradigm, aren't we?
That certainly seems to be the case, at least until the calculations are suddenly transferred to another dimension.
The movements of the after-image are not used to analyze how your sensations behave in different circumstances, but to justify calculations in which they are completely ignored and neutralized. Optical laws in their modern version do not need to mention visual sensations at all. Even the use of after-images has given way to the neutral measurement of eye movements with magnetic and electrical devices.

*****

13 Efferent nerves to the inner ear: a possible explanation
The difference between the vestibular sense and the other sensory organs is that the first is oriented toward signals from within the body, while the others warn the body about external intrusions.
I was sitting in my favorite, old but comfortable swivel chair, which was, once again, leaning a bit to the side. Time to re-fasten the feet after shoving them back into the right position. But then I started to think about this unremarkable fact: I could feel that the chair was misaligned, and I could feel it everywhere but in my head. And, after all, why should I feel it in my head? My head was straight on my shoulders; it was the rest of my body that felt somewhat askew.
Then I understood, or thought I did, why the inner ear would need efferents from other parts of the body. It has to know when it is out of balance even if the head is straight, and therefore silent.
My comparison with vision was the wrong one. In this case, it seems, the brain is entitled to change the sensitivity to blue (read: balance) if blue becomes too dominant. And that has to happen at the source, in the inner ear. It would indeed be like changing the sensitivity of the retina to a certain color, except that here the "retina" is directed directly, and only, at the body itself. And the inner ear needs authentic sensations from every part of our body.
How does it work exactly? I have no idea.

*****

14 Binocular vision and its myths
Wheatstone wrote his famous paper "On Physiological vision" in 1838, one year before Daguerre published his results on the ancestor of photography and cinema. The stereoscope Wheatstone described must have seemed a wondrous instrument, capable of bringing dead objects to life! It also created quite a commotion among physiologists, especially in Germany, where the discipline was approaching its peak under giants like Messner, Mueller and others, all soon to be eclipsed by Helmholtz and Hering.
At that time, the Theory of the Identity of Retinal Images was predominant. It was assumed that the only way binocular single vision could be attained was by the fusion of two similar images falling on corresponding locations on the two retinas. Albrecht Nagel ["Das Sehen mit zwei Augen", 1861; not to be confused with the American batman] quotes Volkmann, who predicted the fall of the discipline and the repudiation of all the calculations made possible by Mueller, Panum and others. Nagel, more optimistic, and a fervent follower of Helmholtz's empiricist approach, argued that the situation was not as dire as it seemed. We were witnessing a painful paradigm shift, and Nagel was to be the savior.
Nowadays, Wheatstone's view of binocular disparity has become common knowledge, and whole industries (optics, cinema) are based on his analysis. Still, it looks more like a Pyrrhic victory than a technical knockout.
The Theory of the Identity of Retinal Images has not been abandoned, as one would expect, but has surreptitiously hidden behind its victor, thereby guaranteeing its survival.
Before I turn to that, I would like to look more closely at Wheatstone's approach.

Binocular disparity and its significance
Wheatstone gives two very interesting examples to introduce the stereoscope. The second gets a very thorough analysis, as it is at the base of his invention: two plane drawings, we would say 2D depictions, are presented in the stereoscope, with the unexpected result of a life-like impression of a real spatial object.
More interesting is the first example: two identical, real, three-dimensional objects, viewed simultaneously through separate tubes, the way we would use binoculars, but without the common point of fixation.
Wheatstone is of course intent on promoting his new device, and only uses this first example to illustrate the principle behind the main course. Strangely enough, as far as I know, nobody since then has ever paused to consider the significance of this first example. After all, it was the physical embodiment of a principle contradicting the one Wheatstone was trying to prove: looking at two three-dimensional objects also gave a three-dimensional sensation. So where does that leave the theory of binocular disparity and Julesz's random-dot stereograms ("Foundations of Cyclopean Perception", 1971)?
The conclusion that two minimally dissimilar, two-dimensional images create the three-dimensional impression of a single object is of course undeniable. It only becomes subject to doubt when used as the explanation of how 3D vision works. The assumptions on which it is based are quite strange, to say the least.
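Whatever one thinks of it as an explanation, the geometry behind binocular disparity is at least easy to state: in a simple pinhole model, the horizontal disparity of a point is inversely proportional to its depth. A minimal sketch (the baseline and focal-length values are rough human-scale figures, assumed purely for illustration):

```python
# Pinhole sketch of binocular disparity: two eyes separated by a
# baseline b look at a point at depth Z; the horizontal offset between
# the point's two retinal projections is f * b / Z.

def disparity(depth, baseline=0.065, focal=0.017):
    """Horizontal disparity (metres on the image plane) of a point at
    the given depth. Baseline and focal length loosely approximate
    human eyes; both values are assumptions, not measurements."""
    return focal * baseline / depth

# Disparity shrinks as depth grows: nearby points differ more between
# the two images than distant ones, which is all a stereoscope exploits.
near, far = disparity(0.5), disparity(5.0)
assert near > far
assert abs(near - 10 * far) < 1e-12  # exact inverse proportionality
```

This is the entire quantitative content of the theory; the philosophical question raised above, whether this geometry explains depth perception or merely accompanies it, is untouched by the arithmetic.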
It assumes that each eye sees the world as two-dimensional. According to Nagel, and in conformity with Helmholtz's conception, when we look with one eye we still see objects as three-dimensional, because that is the way we have learned to see them. A nice example of perception as inference. Also a theory which one either accepts or rejects, because it can neither be disproved nor proven.
What is really interesting though is whether the third dimension is a real property of space or an illusion created by the brain for its own benefit. Do we see objects in three dimensions because there are three dimensions, or is the world in fact as flat as, or even more so than, ancient earth? The answer to this question should determine how seriously we take the theory of binocular disparity.
If space is three dimensional why would we need such a complicated detour to experience it as such? More to the point, why should the experience of the third dimension, if it does indeed exist, be considered as an illusion brought about by habits and learning?
In other words, the theory of binocular disparity, as advocated by Nagel (and not Wheatstone) and every author after him, only makes sense if space is two-dimensional, and depth an illusion!

The Theory of the Identity of Retinal Images apparently had to undergo a serious change, one that might turn out to be strictly cosmetic. It was saved by Panum's analysis of the horopter ("Physiologische Untersuchungen über das Sehen mit zwei Augen", 1858), first mentioned in the eleventh century by Ibn Al Haitham and since then recalculated many times, until the concept lost all meaning. Almost a century ago, Danville, in "The psychological significance of the horopter" (1933), drew attention to the fact that all approaches to the horopter were geometrical, and gave different results because of different starting points. The difficulty of establishing a clear criterion had still not been resolved in his time, as is still the case today. He tried a psychological approach, attempting to determine when humans experienced a single object as double, and found that the theoretical delimitations of the horopter were of little use. Nowadays researchers speak of a theoretical and an empirical horopter. But the idea that there are one or more areas where both retinal inputs are seen as one, and others where a single object is seen as double, has not lost its attraction.
I have certainly no objection against such empirical assumptions that can be investigated scientifically. What I find less obvious are the theoretical prejudices that usually accompany such experiments.
The idea that the brain somehow not only needs corresponding images on both retinas, but is also capable of actively searching for them is, as I said elsewhere, a typical homunculus approach.
Let us take the example of two dissimilar images sent to the two retinas: say, one object inclined to the left, the other to the right. Apparently, in such a case, the brain would try to make fusion possible by rotating the eyes (cyclorotation) in an appropriate way, that is, by limiting the disparity between the two images.

[Quiet George, I'm getting to it!]
The idea that the brain somehow knows what to do so that both images on the retinas could be considered as one is one of the most ludicrous assumptions ever made. Still, the number of articles containing complex mathematical calculations and models keeps growing.
What is even more puzzling is that not a single article I have read can in fact give an example of factual cyclorotation. All conclusions are based on geometrical models that predict eye movements and rotations (Hausen, "Considerations on Listing's law and the primary position by means of a matrix description of eye position control", 1989). And even if they did somehow describe real cyclorotational movements, that would still not necessarily have to be explained by the brain's need to bring two images together. It could just as easily be explained by each eye trying to keep the image it is seeing in focus, independently of what the other eye is doing. A kind of smooth pursuit, as it were, around its own axis.

[Some references:
- Goodenough et al., "Eye torsion in response to a tilted visual stimulus", 1979;
- Mok et al., "Rotation of Listing's plane during vergence", 1992;
- Van Rijn and Van Den Berg, "Binocular eye orientation during fixations: Listing's law to include eye vergence", 1993;
- Minken and Van Gisbergen, "A three dimensional analysis of vergence movements at various levels of elevation", 1994;
- Howard and Rogers, "Binocular Vision and Stereopsis", 1995;
- Hooge and Van Den Berg, "Visually Evoked Cyclovergence and Extended Listing's Law", 2000.]

Split brains, binocular disparity and cyclorotations. Oh la la!
The last point (two disparate images presented to the brain, and its reaction) reminds me of an urban myth that surrounded Kim Peek for years, the model for Dustin Hoffman's Rain Man. Many a writer on the web shamelessly affirms that Peek was capable of reading two books, or two pages, at the same time. Apart from the fact that neither Peek nor his father ever admitted to such a feat, simply observing Peek reading in the many videos on the web shows him looking (incredibly) rapidly at one page and then the other. As somebody without a corpus callosum, Peek's brain was comparable to that of patients who have had the connection between the two hemispheres severed to help them with their epileptic attacks.
Gazzaniga, the world's expert on split brains [even if his boss, Sperry, took all the credit with the Nobel Prize of 1981; at least that is the impression I got reading "Tales from Both Sides of the Brain", 2015, the author's declarations of loyalty notwithstanding] has a very interesting theory concerning the communication between the two isolated hemispheres. Let us see if we can make use of it for our problem of binocular vision.
As formulated as early as his 1969 article "Cross-cueing mechanisms and ipsilateral eye-hand control in split-brain monkeys", Gazzaniga is convinced that the two hemispheres are still able to communicate with each other, even in the case of monkeys, where the ablation of the interconnections is much more radical than in humans.
A very specific way of communicating is what he calls cross-cueing: one hemisphere initiates certain movements (head, eyes, body) to tip the other off about what is going on.
The question is, how does the other hemisphere know what to do with the cues? Communication demands a common language and a common subject to communicate about. He gives the example of one hemisphere having to push some lever but not knowing which, and the other hemisphere signaling with its head and eye movements which lever to push. Restraining the head made cueing all but impossible, and the responses dropped to chance level.

What does that prove? 

To Gazzaniga, the existence of communication channels between hemispheres that are independent of the corpus callosum. Also, even if he considers speaking of two different minds in the same body as going too far, he is convinced that the brain is made of a multitude of modules that work independently of each other but communicate somehow via this cueing process. A non-linguistic module will obviously need to get non-linguistic cues to be able to decipher them, and other modules, just like in the example given, gladly comply.
This is a very interesting view which is, I am afraid, much too general to be falsifiable. So, I will leave it to those who can use it better than me. [better than I? George, you are not British, are you?]

Back to the main question. All modules must be "conscious" of the same situation and have the same goal. In our example, one hemisphere could cue the other only if it knew what was expected from its twin. And the latter had to assume that the attempts at communication were related to the problem at hand, and not to the fact that its sibling was bored with the whole thing.
However you look at it, I would say that there was only one mind, which had to make use of disconnected parts of its brain, and had somehow to improvise each time to get things done. If we consider the hemispheres as the repository of all the experiences the individual has had in his life, we could say that activating one hemisphere or the other brought different aspects of the same mind to the fore. A mind which we could, with some exaggeration, consider as a kind of tabula rasa that gets written on separately each time by one of the hemispheres.

Okay, George, you're up!

George: in cases like this I have to be very careful. See, each time I am in a hemisphere, I am caught in its world and know only what it knows, and can do only what it can do. Then I get called by the other hemisphere, and I do not forget where I had been; I could even remember what I was doing. Sort of. I mean, I still knew how I felt, I just could not remember how to do what I wanted to do.
me: you mean you had the know-what but not the know-how?
George: uh?
me: let us say that you hear a story from hemisphere A, sorry, I can never remember which does what. You then understand the story, but when you get to B, you still know the story, only this time it is not in any language that B can understand, or even in the language that A told it in, so you have no way of telling it to B.
George: That's it! Are you sure you are not one of us?
me: you are one of me! And don't you forget it! [mumble mumble].

In other words, there is information common to both hemispheres, and each hemisphere can be used by George separately, but George has no influence whatsoever on the content of each hemisphere. He can only make the best of it, wherever he is.
The link between both hemispheres can of course be attributed to sub-cortical connections; the question remains what these neural connections can convey. Gazzaniga speaks of emotions, but he does that in the traditional way, as neural substrates in the form of a limbic system. I go beyond that and surmise that there is another dimension which is served by these neural substrates, and which makes those emotions accessible to the whole organism. Such an assumption can, so far, easily be ignored.

binocular disparity again:
What does that mean for our problem? It would seem that the brain has a way of getting the information about one eye from one hemisphere, and combining it with the information from the other eye and hemisphere.
The problem is that it concerns the same kind of information in both hemispheres. No cueing would seem necessary. Also, even if one hemisphere could cue the other, not all eye movements, and certainly not torsional eye rotations, are under voluntary control. Furthermore, the cueing language would have to be very sophisticated to relay the need to rotate the eye around its axis until it fits the other image.
All in all, inter-hemispheric communication does not sound like a viable option.

A last remark: Gazzaniga often made the patients sit on their hands, or restrained their body movements, because he considered the attempts of one hemisphere to communicate with the other as a form of "kids cheating in a classroom". He was so convinced of his model, independent modules cueing each other, that he never paused to consider the possibility that it was in fact one and the same mind that was trying to solve the problem as best it could.

serial or parallel?
The stories relayed by the split-brain tradition would certainly seem to favor the parallel view. The image of a patient trying to put his pants on with one hand while his other hand tries to take them off is worthy of a soap opera. Still, I would bet that even such a tragicomedy was in fact built out of two independent blocks: the hands, I surmise, were not working against each other at the same time, but alternately, however fast the change from one hemisphere to the other went. If I am right, that would mean that the same mind was indeed behind both series of actions, but each time drawing on different experiential content.
Applied to binocular disparity, and the ubiquity of ocular dominance [King and Zhou "New Ideas About Binocular Coordination of Eye Movements: Is There a Chameleon in the Primate Family Tree?", 2000] I would say that we always see mainly through one eye. What the other eye, when it is open, adds to our vision is not fused with what we are already seeing, but added to it.
How can you blend images together? Fusion is something I would not know how to explain, neither in neuronal nor in supra-physical terms. 
What I can understand is that we see what both eyes see, albeit through the dominant or fixating eye, plus what each eye sees that the other does not.

Last but not least: Wheatstone's stereoscope was aimed at foveal vision, the very area which was declared free of binocular disparity by Panum, Nagel and others. Wheatstone has been neutralized: his invention is taken as a gadget, but its theoretical implications are ignored.
Nonetheless the contradiction remains. You cannot have a working stereoscope and a disparity free fovea. One of them has to go.
*****

15 Addendum: 
The contradiction between a working stereoscope and a disparity-free fovea only holds as long as we consider binocular disparity as the cause of depth perception. Once we accept that it is only one of the ways the brain has to experience 3D, the principle that corresponding points on the retinas make binocular single vision possible can also be accepted as another empirical rule, just as viewing two plane images gives a depth impression. We do not need to explain those principles, just acknowledge them and study their consequences.
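Taking disparity as just one empirical depth cue does not stop us from quantifying it. A minimal sketch of the standard pinhole-stereo relation (depth = focal length × baseline / disparity); the 17 mm nodal distance and 65 mm inter-ocular baseline are rough textbook figures I am assuming for illustration, not values from this discussion:

```python
def depth_from_disparity(focal_length_mm, baseline_mm, disparity_mm):
    """Standard pinhole-stereo relation: nearer objects produce
    larger disparities between the two retinal images."""
    if disparity_mm == 0:
        return float("inf")  # zero disparity: point at optical infinity
    return focal_length_mm * baseline_mm / disparity_mm

# Illustrative values: ~17 mm nodal distance of the eye,
# ~65 mm inter-pupillary baseline.
print(depth_from_disparity(17, 65, 0.1))  # depth in mm for a 0.1 mm disparity
```

Note how the relation is purely geometric: it tells us what disparity an object at a given depth produces, but says nothing about how, or whether, the brain uses it.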

By doing that we avoid at the same time the fallacy of the neural correlate.

A special case
When applied to the case where each retina gets a different input, we can take inspiration from the split-brain paradigm.
1) fast alternating fixation: the brain tries to fixate with each eye independently of the other.
2) ocular dominance: it settles on one view.
This happens only if normal conjugate fixation is artificially made impossible.

As far as the controversy between Helmholtz and Hering concerning common or differentiated innervation of both eyes is concerned, I think that both were right in some respects, and wrong in others. A discussion for another time.

******

16 Cyclorotations: do they really exist?

They are of course mechanically possible [I am so glad I do not have to say "metaphysically possible" for once!], but the only place I have ever seen them is in computer animations on the web. None of the videos I have watched concerning eye movements has ever shown an actual cyclorotation. So I will admit that I am a bit at a loss here.

Here is another reason why I do not believe in the reality of cyclorotations.
Imagine looking at a somewhat inclined vertical line that suddenly moves to a sharper angle. The line will impinge on different photoreceptors of the retina. Cyclorotations are supposed to prevent that.
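The textbook rationale can be put in simple geometric terms: a cyclorotation is a rotation of the retinal coordinate frame about the line of sight, so a torsional counter-rotation of the same angle would map the tilted line back onto the receptors it stimulated before. A minimal sketch of that claim (the 10-degree angles are arbitrary illustrations, not measurements):

```python
import math

def rotate(point, angle_deg):
    """Rotate a retinal point about the line of sight (the z-axis)."""
    x, y = point
    a = math.radians(angle_deg)
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))

# A point on a line tilted 10 degrees from vertical.
p = rotate((0.0, 1.0), 10)
# The line tilts a further 10 degrees in the world...
p_tilted = rotate(p, 10)
# ...and a 10-degree torsional counter-rotation of the eye
# brings its image back onto the original receptors.
p_compensated = rotate(p_tilted, -10)
print(all(abs(a - b) < 1e-9 for a, b in zip(p, p_compensated)))  # True
```

This only shows what cyclorotations would accomplish if they occurred; it does not, of course, show that the eye actually performs them.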

Why should they? We see an object in one position, then in another. The idea that we would not know that it was still the same object, the so-called correspondence problem, does not really make much sense. When we are looking at an object, we are not only using our eyes or isolated areas of our brain; the whole brain is engaged. That is also what makes smooth pursuit possible. Nobody ever wonders how smooth pursuit is possible in the context of the correspondence problem, even though both deal with the same so-called problem: the same object at different locations in space and on the retina.

The assumption that a single object has to leave an impression on corresponding parts of the retina was refuted by the stereoscope. So, unless we want to attribute to the brain the faculty of distinguishing the cases of binocular disparity which should be respected, because they create a 3D image, from those where the disparity creates a double image, we have to accept the fact that the brain has no control over any of these situations.

Besides, if it did, how come we still see double images where there are single objects?
Back to the horopter? Which version?

After-image and eye movements
The concept of cyclorotation was deemed indispensable with the discovery that the after-image on the retina changed position with the movements of the eye. Helmholtz even gave an example of when the after-image rotated, and when it did not. Nowadays after-images are no longer used to study eye movements, because they are subjective sensations that the researchers of the 19th century tried to elevate to scientific data. Magnetic and electric devices are used instead, like the so-called search coils. I honestly do not know the technical details, but apparently their results are fully compatible with those obtained on the basis of after-images. That is why I will consider them as equivalent, and hope that I am not wrong.
After-images are persistent visual stimulations which do not need fixation to be perceived, even if they were born in the foveal area. When we move our eyes, intending to fixate our gaze on another object, the after-image remains imprinted on the same location on the retina where it originally came to life. Seeing them rotate is then the only reason why the German physiologists [Panum was Danish and Donders was Dutch] came to believe in cyclorotations.

We cannot fixate on an after-image, for the simple reason that the eye cannot fixate on the retina itself! The fact that the after-image has rotated in its new position, relative to the original one, does not necessarily mean that our eye has rotated around its axis. That would only be the case if you take the after-image as the starting point, which would mean fixating the after-image before, during and after the eye has finished its movement. But that is of course impossible.
If the eye movement is unrelated to the after-image, then the rotation of the after-image must have another cause, not necessarily mechanical. After all, we are speaking of a visual sensation.

But what about a retinal image? Can it be said to rotate around its own axis, as the after-image is supposed to do?
Well, such a retinal image could be compared to a drawing on or inside a globe. How could it ever rotate around its own axis?
*****