The Brain: some problematic concepts
Inhibition is a typical homunculus concept. It creates no problem when considered as an active reaction of the organism (stopping a movement to change direction, for instance), since it would have a definite neuronal target. But how can the brain decide which neurons to inhibit when it is (primarily) engaged in the excitatory stimulation of other neurons? To go back to our previous example: how can the brain stop the stimulation of all memories that have something red in them?
Obviously it cannot. We have no control over which associations are activated at any time. We have all experienced moments when we were really grateful that nobody could read our mind because of the embarrassing thoughts or images that would just pop up in our consciousness. 
That does not mean that there is no (unconscious) inhibition at all. Maybe the embarrassing associations were let through for reasons irrelevant to our problem here, while at the same time many other memories were stopped from popping up. Still, such a hypothesis is unthinkable without a homunculus as agent de circulation or traffic agent.
Unless of course different psychological processes use different chemical signatures, which would bring us back again in neural code territory.
So, the problem remains: how is inhibition possible?
This brings up a peculiar analogy with the receptive field concept. That concept was created by Sherrington (see my thread Lateral Inhibition and Receptive Field) in the context of his research on the scratch reflex. It had an obvious behavioral component that disappeared entirely in Hartline's definition and use. Inhibition, too, was first studied at the junction between neural and muscular processes. A behavioral component, even if it hardly figured in the initial experiments, was always present in the background in its electrical manifestation: the so-called end plate potential or epp, the end plate being the muscular region adjacent to the (motor) nerve endings, and therefore the first to react to electrical or chemical stimuli.

[some references for that initial period are:
- Fatt & Katz: The Effect of Inhibitory Nerve Impulses on a Crustacean Muscle Fibre, 1953
- Brock, Coombs and Eccles: The Recording of Potentials from Motoneurones with an Intracellular Electrode, 1952
- Dudel & Kuffler: Presynaptic Inhibition at the Crayfish Neuromuscular Junction, 1960.]

Another point of interest is the fact that at any time the majority of brain cells, if we are to believe EEGs and brain imaging, are inactive. There is therefore no need to inhibit them preemptively. Such large-scale inhibition would surely have been captured by current recording techniques. Or so I will assume.

When and where can inhibition be predicted? This is certainly a question that has to be answered by (brain) scientists, but allow me a few remarks.
1) Changing direction of movements (Geertsen et al 2011 "Spinal inhibition of descending command to soleus motoneurons is removed prior to dorsiflexion") seems a very evident candidate. As would be similar actions like starting a sentence and then stopping for whatever reason. Most examples would, I suspect, concern a behavior of some kind.
2) Non-behavioral reactions: that is where the problem resides. I am not sure it is even possible to conceive of a non-behavioral inhibition. At least not without the help of a homunculus.

[A very interesting article that splendidly ignores all these problems is: Wang et al "Feedforward Excitation and Inhibition Evoke Dual Modes of Firing in the Cat’s Visual Thalamus during Naturalistic Viewing", 2007. But then, you cannot expect anything else from a typical Hubel & Wiesel analysis!]

3) Interneurons: I have already mentioned a specific case concerning this fundamental type of neuron, generally believed to make up approximately 20% of all the neurons in our brain. There (see in my thread Neurons, "Action Potential and the Brain", the entry "Interneurons as nano-computers"), I analyzed the fact that interneurons are supposed to be able to react selectively to input without activating all the other neurons they are linked to. But such a function does not necessarily imply that the other neurons are somehow inhibited or stopped from reacting. A selective reaction can mean just that: only the targeted neuron is affected. Which lands us, again, in neural code territory!
A way out of this dilemma would be to broaden our perspective. We do not have to accept the necessarily limited view offered by the chemical analysis of an isolated neural circuit. The problem is that if we accept the results of Crandall & Cox, we have to take into consideration that the selective activation of one neuron, precisely because it has been demonstrated in vitro, is independent of the rest of the brain. That is certainly a result that needs pondering.
Starting from the fact that any (inter)neuron can carry a considerable number of synaptic spines, the question of how a specific target is activated each time becomes a fundamental problem without the (unworkable?) assumption of a neural code. But such a problem is, I would say, empirically tractable. Adapted versions of Crandall & Cox's experiments should be able to answer the question: given any interneuron, is it possible, with different stimuli, to activate different synapses selectively, starting from the same stimulating neuron?
A positive answer to that question would probably prove the existence of a neural code, or at least enhance its plausibility. Of course, it must first be shown that the stimulation patterns used are biologically plausible.
A negative answer would be, I am afraid, mind-blowing... unless we allow for another, simpler possibility: that interneurons always activate the synapse closest to the point of stimulation, or some other analogous chemical process.
There are therefore at least two questions that need answering:
- is there a form of non-behavioral inhibition?
- can different specific stimuli of the same source activate selectively different specific synapses?

There is of course the question whether our Manichaean interpretation of inhibitory processes is the right one. Until now, inhibition has been understood as the opposite of excitation, and the chemical and electrical analyses seem to support this view unconditionally. A deeper reflection on the subject would certainly be interesting for more than philosophers alone.

The Brain: some problematic concepts
Remark: I am aware that views on inhibition are much more nuanced than I give them credit for in the previous entry. Specifically, the role of inhibitory processes in the production of brain waves, to which I hope to return in the near future, is a very exciting development. The same goes, though to a much lesser extent, for the different morphological and putative functional distinctions attributed to different types of neurons. Also something that needs a closer look. But the fact remains that we see excitation as the main source of brain activity, and of sensations, while inhibition, however active its functions may be depicted, is seen as a more modulating factor.
I am not sure that does justice to the complexity of the brain as a chemical system, nor to that of our experiential world. Still, that is nothing more than a vague intuition for which I cannot, at present, advance any meaningful argument.

The Brain: some problematic concepts
Here is something to consider. Imagine the following hypothetical case: change all GABAergic (inhibitory) synapses (including the corresponding pre- and post-synaptic receptors) to glutamatergic (excitatory) ones. Would that make any difference to the functioning of the brain, and to the way we experience the world? If my analysis so far is correct, then I can only answer in the negative.

Neural convergence
Still, an article by Freund ("Interneuron Diversity series: Rhythm and mood in perisomatic inhibition", 2003) is entirely based on the assumption that the nature of neurotransmitters is fundamental to the functioning of the brain. Which it is, of course, but maybe not in the way most authors think. [see also, from Freund's group, Miles et al "Differences between Somatic and Dendritic Inhibition in the Hippocampus", 1996; and Soltesz "Diversity in the Neuronal Machine: Order and Variability in Interneuronal Microcircuits", 2006]
His "hypothesis" can be summed up as follows:
Pyramidal cells (in the hippocampus) receive inhibitory input from two kinds of interneurons, each with its own neurotransmitters and set of receptors. One type connects to the perisomatic region of the pyramidal cell, while the other connects to the distal apical dendrites.
According to him, the first type will have more of a structural role, representing "the rigid (non-plastic) precision clockwork without which no cortical operations are possible.", while the second type will have a more complex role in modulating moods and emotions.
He then sets out to give arguments showing that the neurotransmitters and receptors involved in the second type are all targeted by anxiolytic drugs that act on anxiety and similar emotional processes.
I have no reason to doubt the validity of these last arguments, and will assume that they are correct. What interests me is the fact that both types connect to the same cells. Whether they are activated separately or synchronously is really irrelevant; what matters is that, in my model, they should have the same effect, since they are activating the same neuron (setting aside the different roles each configuration can play).
In case they are activated at the same time, their combined effects will only show in the amount of depolarization or hyperpolarization of the post-synaptic neuron. The neurotransmitters, as we have seen in an earlier thread (Neurons, Action Potential and the Brain), will open the ion channels and be immediately taken back up by the system.
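The point about summation can be illustrated with a toy point-neuron model. This is only a minimal sketch under the standard textbook assumption of linear summation of postsynaptic potentials at the soma; the millivolt values are purely illustrative, not measurements.

```python
# Toy point-neuron: simultaneous excitatory and inhibitory inputs
# show up only in the net shift of the membrane potential.

V_REST = -70.0  # resting membrane potential in mV (illustrative value)

def membrane_potential(epsps, ipsps):
    """Linear summation: EPSPs depolarize (+), IPSPs hyperpolarize (-)."""
    return V_REST + sum(epsps) - sum(ipsps)

# Two excitatory inputs of 3 mV and one inhibitory input of 4 mV,
# arriving together, leave only a 2 mV net depolarization:
v = membrane_potential(epsps=[3.0, 3.0], ipsps=[4.0])
print(v)  # -68.0
```

Whichever type of interneuron contributed each input, only the net shift survives in the postsynaptic potential; the separate contributions cannot be read back from it.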

This shows that the so-called difference in function between these two types cannot be proven with this kind of example. Secondly, it draws our attention to the distinction between a medical and a general understanding of the brain. What works medically may not be the best explanation of how the brain functions.

The Brain: some problematic concepts
Memory and grandmother cells:

The problem with Hebbian assemblies (Hebb, "The Organization of Behavior", 1949; see chapters 4-5, where Hebb prefers a statistical treatment of the problem of unwanted activation of assemblies) is the connection between them. Since each assembly can be activated by a small number of its neurons, and since these neurons can never be considered exclusively connected to a specific assembly, we still have the problem of how to activate one assembly without activating them all. The protein synthesis discovered by Kandel and colleagues does not change the problem. In fact, if anything, it makes it more acute. Such a chemical process is a general one and therefore cannot be used as a means of differentiating memories either.
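The activation problem can be made concrete with a toy model of overlapping assemblies. The sketch below is hypothetical: assemblies are just sets of neuron indices, a neuron fires once enough of its assembly-mates are active, and activation is iterated until nothing changes. The assembly sizes, the overlap, and the threshold are arbitrary choices for illustration.

```python
# Toy pattern completion with two overlapping Hebbian assemblies.
# A neuron fires if at least THRESHOLD of its assembly-mates are active.

THRESHOLD = 2  # arbitrary ignition threshold

A = {0, 1, 2, 3, 4}   # assembly A
B = {3, 4, 5, 6, 7}   # assembly B shares neurons 3 and 4 with A

def settle(assemblies, seed):
    """Spread activation until no new neuron crosses threshold."""
    active = set(seed)
    changed = True
    while changed:
        changed = False
        for assembly in assemblies:
            for neuron in assembly:
                if neuron not in active and len(assembly & active) >= THRESHOLD:
                    active.add(neuron)
                    changed = True
    return active

# Seeding two neurons of A ignites all of A -- but the shared
# neurons 3 and 4 then satisfy B's threshold as well:
result = settle([A, B], {0, 1})
print(sorted(result))  # all eight neurons of both assemblies
```

Because neurons 3 and 4 belong to both assemblies, igniting A inevitably ignites B as well; nothing inside such a model keeps the activation confined to one assembly.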

A way out of this dilemma could be a revised version of grandmother cells. The computational view which usually accompanies this conception (the neuron as containing multiple processes that would make the identification of grandmother possible) is of course unusable in my model. But, freed from this philosophical baggage, the idea could still prove to be very fruitful in combination with Hebb's assemblies. I am not sure it would not make Kandel's results superfluous, but let us find out.
However it happens, let us suppose that we have the memory of a single object. Many neurons all over the brain, when activated collectively and exclusively (however improbable that may be in reality), will form together this memory.
All these neurons will individually also be linked to other neurons forming part of other memories.
What we need now is one neuron "to bind them all". That means that every neuron in this memory will be linked to this grandmother neuron. This last neuron has to have a very special place in (the functioning of) the brain, different from its grandchildren. It will only be allowed to make contact with other grandmothers, never with grandchildren directly. At this "higher" level (remember that there are no computations involved!), memories, and even perceptions or ideas, can interact with each other without running the risk of being overwhelmed by the sheer number of their family members.
This is of course pure speculation, and even if it contains a grain of truth, many details would still need to be worked out.
For instance this fundamental question: how to stop neurons from activating more than one grandmother? I have already given my answer in this thread. I do not think it is possible. And I think that the statistical escape Hebb attempted is quite understandable: at a higher level the probability of activating neurons only faintly connected to our current memory gets lower, but never reaches zero. If it did, it would be the end of our creativity. We would never be able to think "outside the box".
This speculation also does not solve the main problem facing each attempt to find a memory trace: how are assemblies formed in the first place? Neither Hebb nor Kandel give a satisfactory answer.
I am afraid I do not have one either, so my speculation is somewhat hanging in the air, without any empirical basis. But then, that is how speculations are supposed to be.

The Brain: some problematic concepts
Lateral Inhibition in the Retina as non-behavioral inhibition?
I wonder. The cells to be inhibited are all known in advance, as is the case in neuromuscular junctions. And that is the major hurdle that inhibition of non-behavioral processes has to clear, the problem that has to be solved before anything else. It is not the inhibitory processes themselves that are at stake here, since I assume they have been explained fairly adequately, but the determination of their targets.
I suppose that retinal lateral inhibition at least teaches us that there is a more general property of inhibitory processes than the behavioral aspect. The target must be known in advance.

[That inhibition has always been analyzed in the context of behavior and reflexes is confirmed by the respective Nobel lectures of Sherrington ("Inhibition as a Coordinative Factor",1932) and Eccles ("The Ionic Mechanism of Postsynaptic Inhibition", 1963)]

The Brain: some problematic concepts
What fires together wires together?
This catchy slogan sums up the Hebbian hypothesis very nicely, and has always been taken as self-explanatory. The problem is that in an active brain many neurons that belong to different mental or bodily processes fire at the same time. A blind implementation of this principle would soon turn the brain into a dark soup in which every neuron is wired to every other one. A situation reminiscent of the first stages of brain development, before the necessary pruning takes place.
Just imagine yourself watching TV, scratching your backside, and answering a very annoying phone call. What should be wired together in such a situation?
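The worry can be put in numbers with a small simulation. Below, a blind "what fires together wires together" rule is applied to random, unrelated co-activity: every co-active pair gets wired, and nothing ever unwires. The population size, activity level, and number of steps are arbitrary illustrative values.

```python
import random

random.seed(0)  # deterministic run for illustration

N_NEURONS = 100
P_ACTIVE = 0.10   # fraction of neurons firing at any moment
N_STEPS = 500

wired = set()  # undirected pairs that have been "wired together"

for _ in range(N_STEPS):
    # unrelated processes (the TV, the scratching, the phone call) all
    # contribute to one undifferentiated set of co-active neurons
    active = [i for i in range(N_NEURONS) if random.random() < P_ACTIVE]
    for i in active:
        for j in active:
            if i < j:
                wired.add((i, j))

n_pairs = N_NEURONS * (N_NEURONS - 1) // 2
print(f"wired fraction: {len(wired) / n_pairs:.2f}")  # approaches 1.0
```

After a few hundred moments of mixed activity, nearly every possible pair has been wired at least once: the dark soup. Some additional principle has to tell the rule which co-activations count.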

The Brain: some problematic concepts
Shape and Function of a Neuron    
There is something deceivingly evident in the idea that the shape of an object must play a role in its functioning. In fact, for many instruments, their shape is their function. Just think of the simple examples of spoons and forks.
The dissociation of these two concepts had to wait for the advent of computers and the realization that a program can be implemented on different kinds of hardware. Suddenly, the relationship between shape and function became more complex and fraught with ambiguities. Especially when the brain is involved.
It has never been unequivocally shown that the shape of a neuron has any influence on its functioning, beyond some obvious, or at least easily calculated, effects.

 ["The more distal the synapse is, the slower the EPSP rises and the broader it is at the soma": Segev "What do dendrites and their synapses tell the neuron?", 2006]. 

More subtle attempts never get further than some empirical patterns that hardly allow any generalization.
[Somogyi and Klausberger "Defined types of cortical interneurone structure space and spike timing in the hippocampus", 2005;
Shai et al "Physiology of Layer 5 Pyramidal Neurons in Mouse Primary Visual Cortex: Coincidence Detection through Bursting", 2015;
Sherman and Guillery "Exploring the Thalamus and Its Role in Cortical Function", 2006]

Does that mean that neuronal shape plays no role in the brain? Such a conclusion would certainly be premature, and we are still a long way from such a clearcut affirmation, but allow me this speculative prediction:
Just as neurotransmitters can tell us to which group a neuron belongs and which are its potential targets, the shape of a neuron can help researchers at least in their efforts to map the brain. The mistake would be to attribute to neuronal shape the role of a spoon or a fork. But, apart from that, the brain, as part of a living organism, has a history, and its evolution also shows in the different shapes neurons take, and in their spatial distribution.
The fact that shape cannot explain brain functions does not make it any less valuable as a tool of research and analysis.

The Brain: some problematic concepts
Intensity of sensation, color and memory
Henri Bergson said somewhere that a philosopher has only a single idea all his life. He was of course also, or mainly, referring to himself. This Nobel prize laureate (1927) was to be declared persona non grata after the discovery of DNA and the reformulation of Darwin's theory of evolution in genetic terms. His vitalist position was no longer salonfähig, and many became convinced that such thinkers were just like the dodo: an evolutionary glitch.
[Another thinker who had his fifteen minutes of fame, also a vitalist, was the then much celebrated Arthur Koestler ("The Act of Creation", 1964), and he asked a very interesting question regarding evolution theory: how can it explain the hereditary behavior of animals? Think of a complex behavior such as the nest-building ability of birds. The idea that minimal incremental genetic mutations could explain the emergence of such a behavior seemed ludicrous to him. I honestly do not know what to think about these kinds of issues, but I agree with Koestler that they cannot simply be put aside. I do not consider myself a vitalist, but too often evolution theory has been blindly put forward as the ultimate panacea for all kinds of problems. See the debate between Pinker ("How the Mind Works", 1997) and Fodor ("The Mind Doesn't Work That Way: The Scope and Limits of Computational Psychology", 2000) for a fairly recent illustration.]
The same Bergson undertook a crusade against the quantitative reductionism that was becoming dominant not only in biology and physiology, but also in the treatment of psychological and spiritual issues. He tried, in a plain language that contrasted forcefully with the technical jargon of scientists and philosophers alike, to advocate the qualitative study of non-physical matters. I remember reading almost all his books as a teenager, at the same time as I was trying to get beyond the first chapter of Hegel's Phenomenology of Mind (the infamous preface was even less accessible to my unformed brain). Materialism [in its Marxist flavor; I have never felt any affinity with the Anglo-Saxon variety, which reeks too much of theology for my taste. I suppose that the strong, nation-wide, political presence of a Bible Belt could not remain unanswered] had not yet taken hold of my thinking, and vitalism has something very appealing to young minds, at least to me and some of my friends. I can still enjoy the poetry of a Bergson or a Koestler, even if, intellectually, I find the lyrics much less convincing than I used to.
Bergson's approach to the sensation of pain ("Matière et Mémoire", 1896; "Essai sur les données immédiates de la conscience", 1889) can, I think, still teach us something, even if it is not really what the author had in mind. His refusal of a quantitative definition of sensation did not allow him to speak directly of more or less pain. But he could not ignore the simple fact that some pains are more intense than others. So, to salvage his position, he had to broaden the field and consider pain as a Gestalt avant la lettre. In his view, more pain meant that more of the body was involved; we would now say that more pain receptors were activated. This is a view that no longer holds. As far as we know, the intensity of pain depends solely on the intensity of the stimulus, even when only a single receptor is involved.
Still, Bergson's qualitative approach to sensation seems, strangely enough, to be vindicated by the results of my analysis. Allow me to elaborate.
We have seen that intensity, even if it can be reconstructed by an external observer, is probably not permanently registered in the brain. So we could, in a way, say that only the sensation as such is retained. A critical introspection might convince you of this peculiar aspect of memory. Try to relive a sensation of pain that was not particularly strong, and compare it with a much stronger pain that you felt in another situation. Can you really "feel" the difference between these two sensations?
These introspective considerations cannot, evidently, be taken as the final argument in favor of my analysis. For that, I prefer to rely on the critical neurological approach I have tried to put in practice in these threads. But together, they form, hopefully, a strong defense against rival views, be they of a vitalist or a so-called materialist persuasion.

Let us now consider the memories we have concerning color. We have also seen that any color receptor in the retina can be the source of any color sensation whatsoever. The unanswered question until now is: how can the brain differentiate between the memory of different colors?
Well, that is quite the conundrum with the absence of a color code in the brain, is it not? That could mean one of two things:
- my analysis is wrong, and we must accept the existence of a hitherto unknown neural color code (which would open the door to all kinds of other neural codes and bury my approach unconditionally).
- we have no memory of colors, just like we have no memory of the intensity of pain.
I will of course plead for the second alternative.

The Introspective approach
I have to admit that I find it quite inconclusive. Colors are such a familiar phenomenon that we can hardly imagine an object without one. And more than that, we seem to be able to remember the color of an apple or a banana. A critical analysis of our own memories would undoubtedly unveil the role of language (and other mental processes) in these memories, and I would be very reluctant to rely on any introspection in this matter. I, for my part, seem to oscillate between two positions that both seem equally clear to me: I can remember colors at one time, and doubt that I do at other times.
This very ambiguity would seem to be an additional argument against any simple view of color memory.

The analytical approach
The activation of a retinal receptor always has a definite color sensation as a result. Whatever chemical processes make it possible, we are able to retain at least a vague memory of a visual experience. What we cannot do is relive the actual experience. The memory of a visual fact is itself not a visual process. We do not see with our "internal eyes" the apple or banana we have in our memory. That would only be possible if the activation of the optic nerve could be run in reverse.
I am deeply reluctant to propose a mysterious residual effect of the optic neurons on the way we experience visual memories. The brain offers enough mysteries of its own without us adding to the mix. 
Still, I think that, translated into an acceptable empirical framework, such an assumption could prove to be fruitful, if only to show that it is the wrong approach. I would therefore welcome experimental attempts to analyze the (indirect) role of optical neurons in the recall of memories. But since we still do not have a satisfactory theory of memory in general, we can only hope that one day, this matter will be resolved in a non-speculative manner. 

- We would have no trouble recognizing a blue apple or banana. Many authors would see in such an ability some computational process that makes such a feat possible. I personally think it comes from the fact that color does not play a (decisive) role outside of vision.

- Even though color in dreams is known to occur regularly, black and white dreams seem to be the rule. Still, it is the exception (colorful dreams) that poses a problem. So dreams cannot help us decide the matter of a neural color code.

In conclusion, I see no way to exclude the existence of a neural color code a priori. All I can do is point to the practical impossibility of such a code on the basis of our current knowledge, when that knowledge is not obscured by philosophical considerations concerning the nature of the brain (and the mind). Also, I hope to have shown that, even though it cannot be proven yet, the idea that we simply do not have color memories is not as strange as it sounds.

The Brain: some problematic concepts
Statistics and probabilities are powerful instruments of analysis not only in the social and physical disciplines, but also in our attempts to understand brain functions. Because statistics and probabilities, just like any other form of human knowledge, are our own creations, we can safely say that they have been produced by the human brain. The problem arises when researchers start attributing similar faculties to the brain outside any worldly context, but simply as neurological functions. The distinction between a form of knowledge that has been created by the (human) brain, and the means which the same brain had to use to create this type of knowledge, is then irremediably lost. It then seems that any form of (human) knowledge had to be already present in a neurological form in the brain before it could be rediscovered as external knowledge. Brain researchers apparently do not find it strange if the brain has to use some form of neuronal statistical tools, or compute the probability of an action potential, even if the concerned individuals were to lack any shred of knowledge in those areas.
Such a metaphysical standpoint can hardly be refuted without positing a different metaphysical standpoint to counter it. I, personally, do not wish to be drawn into such polemics and will therefore simply state my personal conviction that there should be a clear distinction between what the brain creates, and how it creates it. 

The Brain: some problematic concepts
Inhibition shift
While reading Buzsaki's "Rhythms of the Brain" (2006) [which I hope to analyze more thoroughly soon], I came across this bold affirmation that almost caused my heart to stop beating: "Pouille and Scanziani (2004) ["Routing of spike series by dynamic circuits in the hippocampus"] have demonstrated that fast input activation shifts inhibition from the soma to the dendrites" (Fig. 3.8, p. 73). [my emphasis]
If true, all my efforts until now would have been in vain. Such a shift would undeniably be construed as a form of neural coding, something I have always held to be improbable, if not impossible.
Fortunately, a careful reading of the article put my mind to rest.
The authors start with a very explicit vision of how the brain functions:
"Action potentials are the principal means of communication between neurons. It is believed that action potentials convey information through their timing and frequency. To process this information, the brain needs neuronal circuits able to extract and represent these temporal features out of series of action potentials."
The question therefore is whether their results support this view at least partially. I am happy to say that, if I understood them correctly, they did not make their vision plausible. Which would certainly have been the case if

1) they had chosen to stimulate single neurons in conjunction with simultaneous recording of two or more interneurons;
2) they had used biologically plausible stimulation patterns.

As far as the first point is concerned, this is how they put it: "Recurrent inhibition was evoked by stimulating the axons of CA1 pyramidal cells at various frequencies with an electrode placed in the alveus". In other words, there was no way of analyzing the relationship between the stimulation pattern of a specific neuron, and its effects.
"Although it is likely that a single CA1 pyramidal cell axon may contact distinct types of local postsynaptic targets, as has been shown for CA3 pyramidal cell axons, we cannot exclude the possibility that different interneuron types may be contacted by distinct CA1 pyramidal cells."
The results therefore concern (possibly) more than one source, and more than one target.

Whatever the other merits of the article may be, one thing is certain: it does not prove the existence of a neural code. That means that the opening declaration cannot be considered to have any relevance for the results. It also means that, contrary to what Buzsaki suggests in his book, they have not "demonstrated that fast input activation shifts inhibition from the soma to the dendrites", at least not where a single neuron is concerned.

For the second point, the following quote will I think suffice: "the spiking activity of interneurons was monitored in response to series of ten stimuli at 50 Hz delivered to the alveus every five seconds... We gradually increased the stimulation intensity until the threshold for spike generation was reached within the first four stimuli in the series (that is, until at least one of the first four stimuli triggered a spike in at least 20% of the trials, without exceeding 80%)."
This kind of stimulation is no doubt valuable in a lab setting, but its biological significance is certainly not very convincing.

The Brain: some problematic concepts
Scientific Authority
Object formation (and other neurological processes) is still an unsolved puzzle. The difficulty of explaining such a familiar concept may explain the attraction of the mysterian/computational approach to the brain. Whatever we do not understand and (still) cannot explain is attributed to an unknown computation at the level of a single neuron, a population of neurons, or the whole brain. Having done that, we feel free to build on a mystery that has been labeled and neutralized. Neuroscience is practiced as if it were an exact science, with occasional lip service to the complexity of biological/neurological processes. The citation of articles has largely replaced the need for a critical approach to any result obtained in a laboratory or in a so-called natural setting. The number of articles listed as references, and the reputation of the authors quoted, has become the near-absolute criterion of scientific credibility.
But what works for the exact sciences can be very misleading when dealing with living organisms. Here, 2+2 does not always equal 4, and the results obtained by colleagues in the field can never be considered as absolutely reliable, even if they have been confirmed by other researchers. Whereas physical/chemical knowledge can be considered as incremental, one researcher building on the results of his predecessors, brain science seems forever frozen in the position of Alice running in a stationary position while the world is moving under her feet. This is a very uncomfortable situation with no easy solution.
One cannot expect researchers to confirm their predecessors' results each time before starting their own experiments. The belief that major mistakes would somehow have shown up or been discovered earlier is no different from the same belief shared by scientists in other fields. It is an understandable and even necessary belief. Ranting about the fallibility of authority, even if entirely justified in the case of the brain sciences, is therefore no solution. I do not have any solution either, and will therefore end my ranting.
What I personally would like to see is an explicit formulation of which results are assumed to be true, and of the implicit assumptions they rely upon. It would help young researchers form their own opinions instead of blindly referring to articles which they have, very often, not even read.
Detractors would say that, even if put into practice, such a recommendation would certainly not change scientific practice; and they would probably be right. But one may hope, right?

The Brain: some problematic concepts
Object Formation
I do not think we can explain how the brain of an adult, or even that of a young child, can learn to recognize objects if we do not start with the first time an infant opens his eyes (or even sooner).
A baby experiences, visually, very few objects in a non-variable setting. A mother's breast (or milk bottle) and parents' faces are probably the few constants in his life for a number of weeks. It is not improbable that his world will slowly become populated with more and more familiar objects in different settings, seen from different perspectives.
What psychology cannot teach us is what exactly is happening in the brain that makes these familiar visual elements gradually come to life as separate objects.
The absence of (visual) neural codes makes the question even more difficult to answer in a traditional scientific setting.
This is what this setting tells us: whatever process we can imagine that would make it possible for a population of neurons to be linked together has to be exclusive to each object as perceived by the brain. Otherwise there will be no clear-cut distinction between even remotely similar objects.
The problem of Hebbian assemblies is that they create more problems than they can solve. They are inconceivable without some manner of neural code. So, the question needs to be asked:
what if there are no assemblies in the brain?
It is very often assumed that the activation of a neuron is automatically propagated throughout the brain unless an inhibition process stops it somewhere. I must admit that I have been assuming the same thing. But the excitation (or inhibition?) of a neuron can also be considered as a purely local process.
The problem with such a view is the perennial question: what would make such a wave stop at a definite neuron instead of another one?
This problem can only be solved with the assumption that each activation wave has a specific target that is known in advance, pre-wired as it were. Let me clarify this supposition: every (visual) stimulation ends at a grandmother cell. That could be the result of the pruning that takes place in the brain in the first period of life.
 [Chechik and Meilijson  "Neuronal Regulation: A Mechanism For Synaptic Pruning During Brain Maturation", 1999. Even if their research concerns synapses in general and certainly not grandmother cells.]
Chemical processes of creation (remembering) and dissolution (forgetting) of synaptic links during the lifetime of an individual would gradually create dynamic neuronal configurations at "higher" levels. What Hebb called assemblies could then be considered as a logical abstraction referring to neuronal and chemical processes that take months, if not years, to solidify into what we know as individual objects.
[The grandmother cells can also be understood as the end of the sensory process, instead of an absolute ending. There is no reason not to assume that the grandmother cells start a new wave of activation involving other parts of the brain. In fact such an assumption is absolutely necessary.]

Imagine opening the front door and seeing a person whom you identified first as your friend Fred before you realized it was somebody else that looked very much like him. The first act of recognition was not a visual one but the result of different associations in your brain. We certainly do not see every relevant detail in a single act of seeing/looking, and luckily for us we usually do not need to. Recognizing an animal as a generic lion is more important, in most situations, than identifying an individual animal.
The question is, if one takes all the different versions one may have of Fred or a lion, could an external observer see a physical difference between the different neuronal configurations? 
The reader will have noticed that whatever the brain issue, we are, each time, asked to take a position on the matter of neural codes.
Memory traces cannot be understood any differently from other possible neural codes. We could hardly say, on the one hand, that there are no neural codes in the brain, and claim, on the other, that different memories can be distinguished from each other on the basis of some chemical process.
So how do we distinguish between different memories without a neural code, that is, without a memory trace?

Association often assumes a prior neuronal configuration and a secondary one that is activated by the first. The problem of determining the boundaries of the second configuration then becomes very acute. If we accept the idea of grandmother cells, the activation of memories becomes less mysterious. The problem is that we cannot blindly assume that different visual sensations from the same external object will be linked to the same grandmother cell.
Multiple grandmother cells referring to the same external object need to be linked together somehow. This is where the context in which an experience takes place, and the history of the subject, or his brain, become determinant. I think it would be wrong to try and define visual memories with the help of visual elements only.
The combination of grandmother cells and context would obviate the need for memory traces. Furthermore, the absence of memory traces that could unambiguously identify individual memories would, as already stated, explain the possibility of "free" associations.
Once we have made the distinction between sensory sensations and memories thereof, there is no reason to stop there. It would not be difficult to imagine processes whereby only grandmother cells are involved, processes that would give rise to third-level neurons or higher. Each level would make it possible for the brain to approach a more abstract aspect of (internal or external) reality, without ever positing the need of mysterious computations.
Such a model 
[which would certainly deserve the title of a connectionist model without any symbolic logic involved. See Marcus, "The Algebraic Mind: Integrating Connectionism and Cognitive Science", 2001, and Fodor and Pylyshyn's critique of the connectionist approach, "Connectionism and Cognitive Architecture: A Critical Analysis", 1988. Also Chalmers' "Why Fodor and Pylyshyn Were Wrong: The Simplest Refutation", 1996]
would only need, besides a world in which to act and survive, four types of elements:
- receptors,
- neurons,
- sensations,
- chemical processes.
Sensations would form their own dimension, giving meaning to the neuronal configurations and the underlying chemical processes. This meaning must be considered as fundamental not only for the brain itself, but also for any external observer. One cannot understand neuronal processes without referring somehow to the sensations involved.

I am, of course, speculating again.

The Brain: some problematic concepts
Thought has always fascinated philosophers (and, since the last century, computer scientists), but the question of what it is is very quickly replaced by how we think. Since the time of Aristotle, logic has been the main approach to thought processes, slowly replaced by James' psychological approach. With Turing ("Computing Machinery and Intelligence", 1950) thought lost its spiritual luster (something that distinguished Man from Animal and created a special link to the Divine), and was degraded to an unknown number of algorithms.

For a while, thought was equated with the von Neumann architecture ("First Draft of a Report on the EDVAC", 1945), its philosophical pinnacle represented by Fodor's "The Language of Thought" (1975). Rumelhart and McClelland ("Parallel Distributed Processing: Explorations in the Microstructure of Cognition", 1986) represented the so-called connectionist alternative. Still, as mentioned in the previous entry, this approach did not really convince the twentieth-century intelligentsia, even though everybody agreed that the form of connectionist networks was much closer to brain structures than that of classic computers.
Somehow, the fact that neural networks never attained the independence level of a living brain made it impossible for them to be seen as more than a sometimes useful gadget.
I will not attempt here a thorough analysis of the respective pros and cons of both architectures. Especially since I agree with the view that the connectionist approach is, until now, but a variation on the classic symbolic view of thought.
Perhaps that is because the question "What is Thought?" never really got any priority, as far as I know. So, instead of focusing on "how we think", I propose to try and create a framework in which the former question can be investigated.
This is of course only a first crude attempt in a very complex matter.

Does the Brain Think?
I will not consider the obvious objection that only an individual with a body anchored in a social setting can be said to be capable of thinking or any other mental activity. I have always used the concept of "brain" as including such a minimal representation.
Having settled, at least provisionally, this delicate issue, let me come back to the question that lies at the heart of my preoccupations: can we distinguish neural processes in the brain that can be considered as the carrier of thought processes as found in Aristotle's Logic or in modern textbooks of Logic?
People who have read my threads will already have guessed my answer. But allow me to make the question more specific:

We have no doubt that we have sensations, feelings and emotions.

[Whatever the distinction may be. Damasio ("Descartes' Error: Emotion, Reason, and the Human Brain", 1994; "The Feeling of What Happens: Body and Emotion in the Making of Consciousness", 1999; "Looking for Spinoza: Joy, Sorrow, and the Feeling Brain", 2003) builds a whole theory on this distinction. I will refrain from such subtleties and lump all three concepts together, keeping only the idea, which only goes halfway as far as I am concerned, that feelings and emotions are indispensable to rationality. I would gladly use an overarching concept like "qualia" if 20th-century philosophers had not irremediably damaged its usefulness with their scholastic debates. We still have no answer to the question "how many angels can stand on the head of a pin", and likewise, I am afraid that specialists in the third millennium and beyond will still be debating the exact meaning of "qualia" for long-forgotten thinkers. Still, I dare predict that Ned Block's hedonistic predilection for orgasms will still be used as an illustrative introduction to the concept.]
But do we have thoughts? And if we do, where could they possibly generate in the brain?
It is generally agreed that the limbic system is the neuronal seat of emotions, and that (internal and external) sensory organs are the origin of sensations. The frontal lobe is the favorite cranial location for putative thinking processes in the brain, even though Phineas Gage's case (the 19th-century foreman who had an iron rod spear his frontal lobe) showed that many other mental abilities traditionally connected to thinking remained intact (see Damasio 1994).
But I am not interested in phrenological arguments. Whether thought can be localized, or must be considered, like many other brain functions, as distributed all over the brain, is not really interesting in itself (except of course for pharmacological research and brain surgeons). What I want you to think about is the nature of thought itself. Is it what philosophers would call a "natural kind"? Is there something that is clearly, even if not easily, distinguishable from sensations/feelings/emotions, memory, action?
Here is what I think in a nutshell: rationality is a form of emotionality. Rationality embodies the set of rules that do not depend on empirical conventions as created by human history, but on what has survival value for the species as such, and is therefore desirable. Needless to say, it cannot exist in a pure form. I regret the Platonist connotations of such a formulation; they should not be considered a necessary component of the definition.
That means that logical rules, just like social conventions and ethical principles, are the product of society and nature. This goes beyond naive conventionalism, because it acknowledges the fact that some rules, based on physical laws and on genetic and supra-sociological principles [those that can be found in any human society], are inevitable. I will not discuss the putative spiritual/metaphysical origin of such laws or principles; that is a choice each thinker has to make for himself. I just want to make plausible the idea that there is no organ of thought, and that it is therefore very much the question whether there can be something like a language of thought.
More importantly, even if there exists such a thing as Thought, it would not and could not show in the brain. It would necessarily have the same status as sensation/emotion/feeling, with no neural trace to show its existence.

The Brain: some problematic concepts
Where do sensations originate?
This, I'm afraid, is a point where metaphysics seems unavoidable. At first glance, it would seem that the only sensible answer would be: at the receptors. After all, any trace of the sensation disappears afterward. Still, such a logical explanation for a non-physical matter may not be the right answer. The sensation may last long enough for it to be registered, in a neutral if not neutered way, somewhere else in the brain. This last alternative would explain how we retain, vaguely, memories of color or pain, whereas the first answer provides no down-to-earth possibilities. A connection between receptors and some other part of the brain would provide something of a neuronal correlate to sensation, even if it did not itself contain any sensation in any form. It would also offer the hope that one day we will be able to solve at least half of this metaphysical problem.
A neural correlate would also offer a handle on the puzzle created by the discovery of Graham & Hartman (1938) and Hofer et al (2005) that any single receptor can convey any color sensation.
First we must realize that the alternative, each receptor conveying a single specific color sensation, would certainly not be easier to explain. In fact, we would lose the benefit of retinotopy, and would have to explain how the brain makes sense of color sensations that are not spatially related to objects in the real world. I shiver just thinking about the sheer complexity of such an approach.
Retinotopy demands receptor neutrality. This way we know where each sensation is coming from, even if we still do not understand how such a sensation is possible.
The idea of a neural correlate is, alas, a fallacy. We would still need to explain how this correlate would be able to register all the possible color sensations that can be created by a visual stimulation. In fact, its usefulness would be nonexistent; we might just as well assume that general memory processes, whatever they are, would explain as much.
Our choices really seem limited. Either we assume that sensations, once we stop feeling them, disappear entirely, leaving only general neuronal effects. Or, we appeal to some unknown dimension that somehow remains linked with the physical one.
What I really find interesting is not the final choice, which everyone has to make for himself, but the fact that such a dimension cannot be ignored whatever choice one makes. Explanations will differ with each respective brand of materialism or idealism (this is definitely another issue that seems specially created for the joy of philosophers), but the point, as far as I am concerned, is that philosophers are at least trying to come to terms with the problem. I think it is time neuroscientists pulled their heads out of the sand and took their responsibility seriously. They should be the ones torturing their brains on this issue, not the philosophers.

The Brain: some problematic concepts
Remark on Anosognosia 
Most approaches to this ailment have been inspired by a psychiatric perspective (Prigatano et al 1991). One of the most famous researchers involved in the initial attempts to understand this disease was none other than Sigmund Freud.
Carvajal et al ("Visual Anosognosia (Anton-Babinski Syndrome): Report of Two Cases Associated with Ischemic Cerebrovascular Disease", 2012) present a more physiological approach. They report on patients who suffered anosognosia related to ischemic stroke, that is, the result of deficient blood circulation in the brain, and, of special concern to us here, in the occipital lobe. There are no indications of further degeneration effects in the brain, so I will assume that not only the retina and the optic nerve were intact, but also the LGN. Ischemic strokes require urgent medical treatment, unlike blindsight, which is not life-threatening.
The hysterical or mental version of anosognosia is of no interest to us here. The continued presence of visual sensations not related to reality can be attributed to a deficient relay of impulses from the occipital lobe. The position of the LGN as part of the thalamus would indicate, on a view shared by all, that it plays a central role as a relay of the visual sensations themselves to other parts of the brain. The exact pathway followed by visual sensations remains, I'm afraid, a mystery. Still, I am convinced that we will one day be able to draw it unerringly.

The Brain: some problematic concepts
Logic and the Brain

"...the logical structure [of our thinking] can be laid bare by isolating a number of key words and phrases like 'and', 'not', 'every', and 'some'" (Suppes, "Introduction to Logic", 1957). This sums up very nicely the distinction between modern (also called mathematical) logic and traditional Aristotelian logic, centered around the analysis of everyday sentences in plain language. I will not concern myself with propositional logic, nor with higher-order forms, since my goal is simply to illustrate the idea that rational thinking cannot be considered a "natural kind".
First let me try and defuse a possible misunderstanding. It would be a big mistake to think (I cannot, and will therefore not, try to avoid the normal use of that word in everyday language) that my analysis will make it possible to take a random logical argumentation out of a textbook or scientific paper, and simply point to what would amount to an action, the origin of a sensation, or the experience of physical laws. Not that it should not be possible; in fact, not being able to do that would speak very firmly against the plausibility of my thesis. Nonetheless, decorticating a logical argumentation into the elements mentioned would strip away the essence of the whole. There is definitely a vast distinction between a logical argumentation and one based on concrete elements like emotions or the consequences of the law of gravity. Saying that every logical thought has an emotional origin does not entitle us to throw away the "traditional" distinction between rational and emotional thinking. Nor does saying that physical laws or the survival of the species contributed greatly to the emergence of what we call thinking mean that they constitute the direct motivation and cause of deployment of any logical endeavor. It is one thing to point to the origin of logic, and another to equate it with its points of origin. Non-creationists believe that their primal ancestors came from the sea, but that does not mean that they would, normally, throw themselves overboard from a cruise ship, in the middle of the ocean and without any equipment, to enjoy a spiritual return to their roots.

Having said that, I do believe that an analysis of the same elements mentioned by Suppes and the abstract patterns in which they can be used, would also lay bare the non-logical foundations of logic. The fact, for instance, that we cannot use those terms indiscriminately is itself not a logical rule. Logic can be applied to the totality of rules and conventions that we consider as correct or acceptable, but it can also be applied to what makes us think of them as such. In the latter case, we need non-logical arguments to explain this belief. [A wonderful illustration of how trying to found Logic on logical principles leads to an infinite regress can be found in Lewis Carroll: "What the Tortoise Said to Achilles", 1895. Carroll is known mostly for his "Alice's Adventures in Wonderland", and much less for his logical work.]

I would much prefer to take an example out of a modern textbook like Suppes', or the classic "Mathematical Logic" by Kleene (1967), as that would lend more prestige to my analysis, but readability (and a healthy dose of realism as far as my mathematical understanding is concerned) compels me to confine myself to an elementary syllogism:

A. All men are mortal
B. Elvis is a man
C. Elvis is mortal

First, we need a context. We could be in class, following a logic course. Or, more interestingly, having a debate with somebody claiming that Elvis still lives (even though he has already left the building).
The conclusion seems inescapable. Why? Especially since it fails to convince some fans of this famous artist. Let us not dwell on the psychological ramifications that make such fans refuse this conclusion. More interesting is the question why we believe in it.
First, let me state that no human being ever argues purely according to logical rules. Second, even in an abstract context, the logical patterns used for instance in computer design are, when not already divided into isolated blocks, beyond the mental capacity of all but a few geniuses. Even gifted students in mathematical logic need to pore over the texts and torture their brains to follow all the steps in some of the examples found in textbooks. Logic as a discipline does not give a realistic view of human thinking.
The choice of a simple syllogism is, after all, not such a bad idea.
Computers, which certainly use a form of (binary) logic, would seem to prove that thought is a real process, since it can be implemented in a material substrate. Even the classic computer is built from transistors, which somehow remind us of neuronal circuits. The von Neumann architecture may rely on symbolic logic, but the hardware is closer to the connectionist ideal than we would think at first glance. Logic gates do not resemble any syllogism or formal rule, except in their interpretation.
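The point that the meaning of a gate lives only in its interpretation can be made concrete with a small sketch. The code below, a toy illustration in Python (the function names and the NAND-only construction are my choices, not anything from the literature cited here), builds the familiar gates out of a single NAND primitive. Nothing in the wiring says whether an input bit stands for "Socrates is a man" or "switch 1 is closed"; the gate computes the same mapping either way.

```python
# A minimal sketch: logic gates as pure functions on bits.
# The names AND/OR/NOT are our interpretation; the gates themselves
# are just mappings from bit pairs to bits.

def nand(a: int, b: int) -> int:
    """A universal gate: every other gate below is built from it."""
    return 0 if (a and b) else 1

def not_(a: int) -> int:
    return nand(a, a)

def and_(a: int, b: int) -> int:
    return not_(nand(a, b))

def or_(a: int, b: int) -> int:
    return nand(not_(a), not_(b))

# The same wiring computes the same truth table regardless of what
# the bits are taken to mean: the syllogism is in the reader, not the gate.
for a in (0, 1):
    for b in (0, 1):
        assert and_(a, b) == (a & b)
        assert or_(a, b) == (a | b)
        assert not_(a) == (1 - a)
```

That every gate reduces to NAND also underlines how little structure the substrate itself carries: one anonymous operation, repeated, suffices for all of Boolean logic.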
Here is a fundamental question that certainly has bearing on our issue: do we have the neurological equivalents of logic gates in the brain?
Let us start with the obvious. Man has invented logic (as a discipline), and computers. Therefore there must be something in the human brain that made the conception of logic gates possible. This will remind the reader of the question whether neuronal circuits can be said to use mathematical or statistical functions in a neuronal form. Such innate knowledge would make the existence of neuronal logic gates even more plausible: Man could invent computers because his brain already functions like one.
"The proof of the pudding is in the eating", which means in our case that only a practical implementation of my thesis would count as a plausible refutation of (at least the exclusive) validity of the computational view. The latter has, I am sorry to say, home advantage, in that we already have computers made by Man.
The only argument that could weaken it in the long run is, or so I hope, to systematically demand that its proponents show empirical examples of computations in the brain, instead of blindly assuming their existence. Until such computations have been indubitably shown to exist [after critical analysis by "non-believers"; after all, who would believe a priest claiming to have found the ultimate proof of the existence of God, besides his own parish?], the computational view will remain what it is: a powerful and influential, even if not entirely rational, belief.

Back to our syllogism. Here is another question. When do we think in syllogisms? My guess would be never... Unless we are unsure of our conclusion or wish to convince somebody else. 

A. All men are mortal
The logical quantifier (all) would be very difficult, if not impossible, to translate into a logic gate. In fact, it would be indistinguishable from "some [men]", or even "a [man]" or "this [man]". Only the context, properly interpreted by a human brain, would make it possible to distinguish between the different meanings. A fundamental aspect of human logic, quantifiers, cannot be translated into a material substrate, the mathematical symbolism that gives any statement a mystical aura of scientificity notwithstanding.
We do not, of course, have to define each concept by a single logic gate. "Some" could be translated by a conjunction of different operations, one symbolizing the membership of an element in a set, the other the opposite, the fact that an element is not a member of the set. We would very quickly realize that we need the same concepts of 'all' and 'some' to define the operations in which those terms are used. That is why they are called 'primitives'.
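How far such a translation can actually go is visible in any programming language. The sketch below is a toy illustration in Python (the set names are invented for the example): the built-in quantifiers `all` and `any` reduce 'all' and 'some' to finite chains of membership tests, which is exactly the "all the men I have met" reading and nothing stronger.

```python
# Quantifiers in a programming language only ever range over finite,
# enumerated collections -- the "all the men I have met" reading.
men_i_have_met = {"Fred", "Elvis", "Aristotle"}
mortal = {"Fred", "Elvis", "Aristotle", "Alice"}

# "All men are mortal" becomes a finite conjunction of membership tests...
all_mortal = all(m in mortal for m in men_i_have_met)

# ...and "some men are mortal" a finite disjunction.
some_mortal = any(m in mortal for m in men_i_have_met)

print(all_mortal, some_mortal)  # → True True

# The unrestricted "all men" -- past, present, and merely possible --
# has no such translation: there is no finite collection to iterate over.
```

Note that `all` and `any` are themselves defined by iteration, that is, in terms that already presuppose 'every element' and 'some element': the primitives reappear in their own definition.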
How then does the brain arrive at concepts like 'all' and 'some'? Computer scientists have been haunted by the so-called halting problem ever since it was discovered by Turing ("On Computable Numbers, with an Application to the Entscheidungsproblem", 1937) and Church ("A Note on the Entscheidungsproblem", 1936). There the question was how a computer program could decide in advance whether an algorithm would stop when encountering a solution or go on indefinitely. This does not look like our problem at hand, at least superficially. A closer look shows that it is not only computers that experience a halting problem. Brains also have no logical way of deciding in advance whether a problem can be solved in a finite number of steps, or will go on indefinitely.
We have all experienced, at one time or another, a computer crash. We know that the processor is somehow still crunching numbers, even if there is nothing to see of that activity on our screen. Here is what happens in a crash, illustrated by a very elementary Basic program:

1: do A
2: go to 1
3: end

Humans have no problem recognizing an infinite loop. They realize immediately that the third instruction (end) would never be reached. Humans only start having problems with infinite loops when programs are so long and so complex that they cannot see the overall structure. Computers would be fooled even by this simple algorithm.
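The "same thing over and over again" criterion discussed here can itself be mechanized for programs as tiny as this one. The sketch below is my own toy illustration in Python (the three-instruction 'goto' language and the function name are invented for the example): if an interpreter ever revisits an identical machine state, the program must loop forever. This only works because the state space here is trivially small; for real programs the states explode, and no general detector exists, which is precisely Turing's point.

```python
def runs_forever(program):
    """Detect an infinite loop in a tiny 'goto' language by watching for
    a repeated machine state (here just the instruction pointer, since
    the instructions change no data)."""
    seen = set()
    pc = 0  # instruction pointer
    while pc < len(program):
        if pc in seen:
            return True          # same state again: the loop never ends
        seen.add(pc)
        op = program[pc]
        if op[0] == "goto":
            pc = op[1]           # jump back (or forward)
        else:                    # "do" or "end": fall through
            pc += 1
    return False

# The example from the text: 1) do A  2) go to 1  3) end
looping  = [("do", "A"), ("goto", 0), ("end",)]
straight = [("do", "A"), ("do", "B"), ("end",)]

print(runs_forever(looping))   # → True
print(runs_forever(straight))  # → False
```

The detector succeeds exactly where a human glance succeeds: it notices that nothing has changed between two visits to the same instruction. What it cannot supply is the further judgment that repeating without change is pointless; that rule, as the Tortoise would remind us, has to come from outside the program.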
It would therefore seem that humans, unlike computers, can solve the halting problem within practical limitations. How do we do that?
Here is my own interpretation, for whatever it is worth.
We are able to oversee the algorithm in its entirety, and therefore see that our final goal is the third instruction. But the second instruction tells us to go back to where we came from. It is not so much the fact that we have to temporarily abandon our final goal, as the fact that we are asked to do the same thing over and over again. We would have no problem with an additional instruction telling us "do B", even if that also meant postponing arrival at the final destination. That is what would make us follow along a very long list of instructions when we really want to get to the final step. This list, though, cannot be indefinitely long. No motivation is strong enough for, say, millions of instructions. We would either die before reaching the end or, more probably, just get fed up with the whole process and quit.
So why do we need only a cursory look at our simple example to conclude that it is an infinite loop? Here is a possible, rational explanation, one I have already given, and the only one I can think of:
We see that we have to go back each time to the same instruction and do the same thing.
But is that an explanation, or merely a description of what we would be doing if we followed the instructions blindly?
An explanation would say why doing just that is not the correct thing to do. "It makes no sense to keep doing the same thing over and over again without any change from one trial to the next" could be such an explanation. But as the Tortoise would tell us, that is certainly a rule that we need to write down in our notebook.
To make a long story short, it seems to me that there are many reasons why following the instructions blindly would revolt us (see the beautiful book by Camus, "Le Mythe de Sisyphe", 1942). None of them is purely logical or even rational. In a word, we would hate it!
I therefore find the following conclusion very justified: the human brain, just like the computer, is incapable of solving the halting problem by purely logical means. The decision to stop finds its ultimate justification in an emotional reaction.

Now we can go back to our syllogism again, and ask ourselves how the brain deals with a universal quantifier like 'all'.
What does the expression "all men" mean?
- all men that have existed, exist and will exist?
- how about men that could have existed but did not because of abortions or other accidents?
Apparently, we would have no problem defining this universal quantifier. 'All men' seems to be just that: 'all men'. Every example you could give would just be added to the set. But wait a minute! How many men are we talking about here? Billions, zillions, whateverillions? That is more men than we have neurons to accommodate. We cannot possibly have even one neuron for each of them, so how will our brain be able to refer to 'all men'?
We apparently have to abandon the idea that the neural 'all men' refers to possible individuals as concrete neural entities. How about all the men an individual has known in his life? We would then not need to assume more than his brain can handle.
We would of course need to go beyond "all the men I have met in my life" [this sounds more and more like a Dolly Parton song] and arrive at "all the men I have met or could possibly meet". Which would bring us back to the same problem as before.
Maybe we have to lose any concrete property attached to the concept 'men'. It does not really matter whom we have or have not met; 'men' becomes 'Man'. Instead of 'all men are mortal', we would then have 'Man is mortal'. This concept could easily be represented by one or multiple grandmother cells, and we would have our neural correlate for an abstract concept. This neural correlate would be different from one individual to another.
[For some, 'Man' will mean 'all white men', for others 'all non-Jews' or 'all non-Arabs', or 'all Chinese'. Luckily, in the end, we are all Muggles.]

 I will leave it to professional philosophers (by that I mean those poor souls who have to "publish or perish") to decide if 'Man' has any other significance besides these individual variations.
Here is the punch line: the neuronal correlate of 'Man', as an empirical manifestation of the concept, will always be determined by the individual brain that contains it.
Maybe we should look at a philosopher's brain. Concepts are, after all, their bread and butter. What would the neural correlate of 'Man' look like in such an enlightened brain?
Obviously, there will be all kinds of associations linked to the 'Man' neuron, having to do with the specific knowledge our philosopher has acquired in his lifetime. Besides that, I could not name any property that would be fundamentally different from, or absent in, the 'Man' neuron in the brain of an illiterate. 'Man' can only come from our empirical experiences with people and from what we have learned in life. Associations will change with time and experience; the only neuronal constant (if we abstract from the different ways a word can be pronounced by different individuals, and from the foreign languages we learn) will probably be the (written) name we give to the concept.

Still, to transcend our own experiences we will need something else in the form of specific associations that would have their (abstract) equivalent in all individual brains: the possible transformations we can apply to our empirical experiences. When we say "Man", we may think of an abstraction based on those experiences, but we can also imagine that those experiences could have been different. That we would have met other people, or that other people in other times could have similar experiences. It is those transformations I consider as supra-sociological, bordering on the genetic. Not their results, which can only be an empirical variation particular to a certain individual. Those transformations themselves do not need to be a mysterious faculty with its own specific rules and organs. They are regular mental operations which are used in other contexts and for different problems. The difficulty of defining them in an abstract way is related to the general difficulty we encounter in explaining even the simplest of our mental activities. Their mystery is the mystery of the brain in general. They do not make our task of understanding the brain any easier, but they do not make it any more complicated either.

The Brain: some problematic concepts
Point of Origin of Visual Sensations and Blindsight

There is a very disturbing discrepancy between the phenomenon of blindsight on one hand, and, on the other hand, anosognosia (or Anton-Babinski syndrome; Prigatano et al "Awareness of Deficit after Brain Injury: Clinical and Theoretical Issues", 1991) and all kinds of cortical plasticity phenomena. For the latter, the work of Bach-y-Rita in the 1960s and 70s ("Vision substitution by tactile image projection", 1969; "Brain Mechanisms in Sensory Substitution", 1972; Cattaneo & Vecchi "Blind Vision", 2011) on using the skin or the tongue to process visual information is beautifully complemented by the neurological experiments of the laboratory of Mriganka Sur at MIT (Sur et al "Experimentally induced visual projections into auditory thalamus and cortex", 1988; Roe et al "Visual Projections Routed to the Auditory Pathway in Ferrets: Receptive Fields of Visual Neurons in Primary Auditory Cortex", 1992). Except for blindsight, all these phenomena seem to reinforce in me the idea that the cortex can be considered merely as a memory system. This is certainly not the conclusion of the different authors mentioned, as they all subscribe to the view that cortical plasticity somehow proves the versatility of brain areas in computing inputs that were originally meant to be processed elsewhere: the auditory cortex, for instance, was made to "process" visual inputs, and that with great success.
Bach-y-Rita made it possible for blind people to "see", thanks to simple devices that were placed on the back, and later on the tongue. Patients learned to interpret the electrical impulses that were felt on the skin or tongue as if they were "visual" inputs. They could distinguish between shapes and successfully navigate freely without bumping into walls, people or objects.
Likewise, the ferrets on which Sur's team practiced their surgical skill learned to react to visual stimulations that were rerouted to the auditory cortex. I was, at first, a little bit skeptical about the methods used to determine the success of those operations. It looked like the articles mentioned relied mainly on histological analysis, or at least on the recording of cortical cells in restrained animals. At my request I was sent additional articles which clearly showed that the results also concerned behavioral aspects: the "rewired" animals did react to visual stimuli delivered to the auditory cortex, just as normal ferrets reacted to the same stimuli in their visual cortex. (Sur "Rewiring cortex: cross-modal plasticity and its implications for cortical development and function", 2004; von Melchner et al "Visual behaviour mediated by retinal projections directed to the auditory pathway", 2000.)
As for anosognosia (often presented as the opposite of blindsight), which, just like blindsight, also involves lesions to the primary visual area, the fact that patients kept affirming that they could see, while all tests and observations proved the contrary, makes the role of the cortex even more of a puzzle.
The primary visual area seems in one case, blindsight, indispensable to the production of visual sensations. No V1 means no sensations, even if a rudimentary form of vision is kept intact. While in the second case, anosognosia, visual sensations are apparently kept intact even if visual abilities have entirely disappeared.

What is it like to be a rewired ferret? 
I will refrain from any analysis of the classical "What Is It Like to Be a Bat?" by Nagel (1974) and simply remark that even our ignorance of what the animals experienced ["the precise quality of the light-induced sensation in rewired animals remains unknown", von Melchner et al, 2000] does not change the fact that they were apparently experiencing something, be it visual or otherwise. And that is definitely different from blindsight.
Another article (Huxlin et al "Perceptual Relearning of Complex Visual Motion after V1 Damage in Humans", 2009) makes the puzzle even more complex. The authors intensively and extensively trained (for one to two years) patients who were suffering from damage to V1 (the primary visual area). The trials specifically targeted the "blind" areas in the retina and the brain. The results showed noticeable improvements in perception and visual awareness: blindsight effects were neutralized, and people could sense visual stimulations, at least as far as motion is concerned. The article does not mention vision of stationary objects, the reason being that "several visual pathways survive V1 damage...[which] mediate complex visual motion perception". Which leaves the mystery of blindsight complete.

[See, for unconditional support of blindsight, Ffytche & Zeki "The primary visual cortex, and feedback to it, are not necessary for conscious vision", 2011; for a critical review, Cowey "The Blindsight Saga", 2009; a complete rejection can be found in Campion et al "Is blindsight an effect of scattered light, spared cortex, and near-threshold vision?", 1983.]

Here are the main points that deserve our attention:
1) The primary visual area cannot be considered as the locus of visual sensations. Patients with the Anton-Babinski syndrome do have visual sensations, even if these sensations seem totally unrelated to reality. Most researchers dismiss the subjective reports as "confabulation", even though it seems strange to take blindsight patients at their word and refuse the same courtesy to anosognosia patients. What happens to these people is, I think, somehow similar to the experiences of blind people who recovered their sight after a cataract operation (von Senden "Space and Sight: the perception of space and shape in the congenitally blind before and after operation", 1960). Those patients had great difficulty adapting to their new sense and understanding the visual world. That is of course not the case with the anosognosia patients, even though their confusion is reminiscent of that of the cataract patients. I personally believe that they did have visual sensations, since their retina and optic nerve were intact. The lesions in the cortical pathways, combined with the degeneration of the LGN that rapidly follows lesions of V1, were, in my view, the cause of the discrepancy between their visual sensations and reality.

2) The fact that a sensory area like the primary auditory area A1 can be recruited for visual stimuli makes me believe that the reverse (auditory input to V1) would work as well. We can of course assume some mysterious processes that would make it possible for the brain not only to know that it is receiving a different kind of input, but also what to do with it. Or we can go for the simplest explanation: the (primary) cortex accepts whatever input is sent to it for the simple reason that it does not do anything to change or 'process' this input. Just like RAM memory in a computer.

3) The solution to the puzzle created by the relation between sensations and the cortex, and phenomena like blindsight can, I think, be found in Sur et al (1988): "Retinal targets were reduced in newborn ferret pups by ablating the superior colliculus and visual cortical areas 17 and 18 of one hemisphere[...] Ablating visual cortex causes the lateral geniculate nucleus (LGN) in the ipsilateral hemisphere to atrophy severely by retrograde degeneration."
In other words, the rewired ferrets were turned into blindsight patients! What saved them from this fate was the fact that afferents to the LGN had been rerouted to the medial geniculate nucleus (MGN), making them use the auditory pathway! As we have seen, these animals had no problem reacting correctly to visual stimuli, even though we have no way of knowing what they really "saw". This is where the experiments in sensory substitution can give us a clue. Maybe what the ferrets were experiencing was similar to what human blind people experienced with skin or tongue devices as surrogate for "real" vision. 

4) I am aware that all these considerations do not prove beyond a doubt that sensations do not originate in the cortex, even though I hope that it makes the idea at least plausible.
What remains as a final piece of the puzzle is a clear picture of the cortico-thalamic connections, and from there the connections to the limbic system and the brain stem.
Many books have been written on the subject, and almost as many questions are still waiting for an answer. At least one thing is certain: we can exclude the neo-cortex as the locus of sensations. It is just too young for such a heavy responsibility.
A point that also deserves attention is the necessity for the visual pathway to store the incoming visual inputs at least temporarily. It is certainly conceivable that we are not seeing what is immediately impinging on our retina, but what has already left it for the optic nerve, if not later. If that is the case, a disturbance up the pathway, in V1 for instance, could certainly disrupt the relay of visual sensations. The visual cortex would be an indispensable link in the creation of visual sensations, even if it were not the locus of their "creation". [This analysis would, I'm afraid, leave open the question of why anosognosia patients do have visual sensations. The jury is still out on this.]

5) Imagine going further with the experiments of the Sur group. What would happen if the "rewired" auditory area were itself rerouted to the areas to which the primary visual area normally projects? In other words, what if we only swapped V1 and A1, but otherwise kept everything the same? Is there any reason why the rewired animals would behave any differently from normal animals?

The Brain: some problematic concepts
Due to a computer mistake, the follow-up to the previous entry, which should be: Remark on Anosognosia, has been misplaced.

The Brain: some problematic concepts

Neural Integrators

[Robinson
- "Eye Movement Control in Primates", 1968;
- "Oculomotor control signals". In G. Iennerstrand and P. Bach-y-Rita, Eds., Basic Mechanisms of Ocular Motility and Their Clinical Implications, 1975;
- "The use of control  systems analysis in the neurophysiology of eye movements", 1981; 
- "Information Processing to Create Eye Movements", 1992;

Cannon&Robinson "An improved neural-network model for the neural integrator of the oculomotor system: more realistic neuron behavior", 1985;
Anastasio&Robinson "The Distributed Representation of Vestibulo-Oculomotor Signals by Brain-Stem Neurons", 1989;]

As you can see, Robinson has been studying eye movements for about half a century. One of his achievements is the creation of neural networks that are supposed to be very realistic models of what happens in some parts of the brain involved in ocular and vestibular reflexes.
A recurring theme in his articles is the idea that neural networks give a much more useful and complete picture of neural processes than what he calls block diagrams. You cannot poke in diagrams, while neural networks react just like neurons do.
In fact, the resemblance, in his articles, between the model and the real thing is just uncanny.
Here is how he describes what happens in ocular and vestibular neurons and muscles.
The brain receives ocular and vestibular input which must be mathematically integrated to produce the necessary muscular commands. 
"There are three major oculomotor subsystems: the VOR; the saccadic system that causes the eyes to jump rapidly from one target to another; and the smooth pursuit system that allows the eyes to track a moving target. Each appears in the caudal pons as a velocity command." (my emphasis)
The article (1992) continues:
"Muscles are largely position actuators; against a constant load, position is proportional to innervation. The motoneurons of the extraocular muscles also need a signal proportional to desired eye position as well as velocity. Since eye-movement commands enter the caudal pons as eye-velocity commands, the necessary eye-position command is obtained by integrating the velocity signals."
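Robinson's claim here, that the position command is nothing but the running time-integral of the velocity command, can be sketched numerically. This is only a toy illustration; the function name, step size, and saccade-like pulse are my own choices, not Robinson's data:

```python
# Minimal sketch: an eye-position command obtained by accumulating
# (Euler-integrating) a sampled velocity command over time.

def integrate_velocity(velocities, dt):
    """Return the running integral of a sampled velocity signal."""
    position = 0.0
    positions = []
    for v in velocities:
        position += v * dt        # position(t) = integral of velocity
        positions.append(position)
    return positions

dt = 0.001                        # 1 ms samples (illustrative)
burst = [400.0] * 50              # a 400 deg/s burst lasting 50 ms
hold = [0.0] * 150                # then the velocity command goes silent
positions = integrate_velocity(burst + hold, dt)

# The integrator converts the transient velocity burst into a sustained
# position command: 400 deg/s * 0.05 s = 20 degrees, held afterwards.
print(round(positions[-1], 6))    # -> 20.0
```

The point of the toy is Robinson's own: motoneurons that need a position signal can only get one from a velocity signal if something performs this summation.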
The question of course is how we know that said inputs are velocity signals. Robinson did not of course just pull it out of thin air:
"Recordings in the brain stem [...] show that neurons in the region of the neural integrator have a variety of background firing rates, all carry some eye-velocity signal as well as the eye-position signal, and carry the former with different strengths depending on the type of eye movement being made." (1985)
In other words, there was a strong correlation between different eye and head movements on one side, and firing patterns of neurons in the brain stem (and elsewhere) on the other side.
And now the punch line: 
"The eye-movement control system contains a neural network that integrates with respect to time... a neural integrator must exist to convert the velocity to the position command." (my emphasis)
The argumentation is obviously sound, especially in light of the following:
"Recordings from single neurons indicate that oculomotor signals are combined and transmitted in a diverse manner. Each neuron carries components of signals with apparently randomly chosen weights so that each total signal is distributed over the population." (1989) But, luckily, the apparent randomness of the readings does not last.
"When, however, one records from neurons in the region where these signals are combined and integrated [...] to be passed on to the motoneurons [...], one finds a diversity of signal combinations indicating a truly distributed representation."
The mathematical expression of this change can be represented by very simple equations which I will accept at face value. 
Conclusion: "Thus, the velocity commands spread out over the premotor population and then converge on the motoneurons to produce the correct final output. The seemingly random distribution of the signal components suggests that the network began with a partially randomized set of synaptic weights and then organized itself into whatever pattern of connectivity got the job done based on trial and error."

I must admit that this is quite an achievement. One advantage of Robinson's approach is its clarity, especially when compared to the cryptic and obscure style of a David Marr, who was one of the first to describe a part of the brain, the cerebellum, in mathematical terms.
Nonetheless, Marr sounded more convincing exactly because he was so obscure! It was not clear that the mathematical model he presented was in fact a figment of his own imagination, so that many people still think that he actually was describing a real cerebellum. But Marr will have to wait; let us go back to Robinson.
Once we accept that the signals sent by the different ocular systems are velocity signals, we have in fact already submitted to Robinson's logic and can only agree with him. 
I will not repeat here my arguments concerning firing patterns as data only accessible to an external observer (see my thread Neurons, Action Potential and the Brain). They should, by themselves, be sufficient to at least seriously reduce the plausibility of Robinson's approach.

I will attempt to present more specific arguments concerning this special case.

Let me recapitulate:

Neuronal Processes-------------------Neural Model
1) velocity commands----------------- Input Layer
2) integration by interneurons------- Hidden Layer
3) muscular movements-------------- Output Layer

Normally, a model is a simplified version of a more complex real process. What we have here are in fact two phenomena which look like mirror images of each other.
It would seem that Robinson's interpretation of the neural and muscular processes involved has already turned them into an abstract model. One can therefore wonder which is supposed to prove the plausibility of which. As it is, Robinson could, with good reason, claim that the systems of eye movements prove the plausibility of such a neural network. In fact, such an approach would have been quite understandable when connectionism was in its infancy, a period in which Robinson was already a very active researcher. Now the tables have turned, and neural networks are seen as a legitimate way to prove what cannot be proven empirically. Make enough models that look like the real things, and then you may demand of the real things that they look like the models!
Robinson certainly seems aware of this dilemma; in (1992) he states very clearly:
"Finally, how does one test a model network such as that proposed for the neural integrator? It involves the microcircuitry with which small sets of circumscribed cells talk to each other and process signals. The technology is not yet available to allow us to answer this question. I know of no real, successful examples. This, I believe, is a true roadblock in neurophysiology. If we cannot solve it, we must forever be content to describe what cell groups do but not how they do it."
This refers of course to what is happening with the interneurons. What does a neural version of mathematical integration look like? Nobody knows, but "a neural integrator must exist"!
This is probably the only conclusion possible once you have accepted the premises.
We will look at those, but first let us consider the current issue a little bit longer.

It is here that the problem of the hidden layer shines in all its glory. What Robinson is in fact doing is explaining a mystery (the neural integration of random inputs) by another mystery.
What happens really in the Hidden layer?
Weights are adjusted [Ah, our homunculus is finally back, I was starting to miss him! I think I will call him George, after the butler I never had.] until they give the desired output. Why do some weights have to be strengthened and others weakened? Nobody knows. And why is the final configuration the right one? Nobody knows either. You can just see that sometimes the output is too high, other times too low. It all depends on The Equation. But where does this equation come from? From the analysis of the real processes of course! It is not something the researcher just likes because of its beauty and elegance!
So, the equation comes from a mathematical analysis of the eye movement systems, which means that this analysis is in fact the sole justification of the neural network?
Plus, the researcher is allowed to keep changing the weights until he gets what he wants? I want one of those!
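The "keep changing the weights until it works" procedure is easy to caricature. In the sketch below (all names and numbers are mine, not Robinson's), a single recurrent unit x[t+1] = w*x[t] + v[t] behaves as a perfect integrator only when its feedback weight w equals 1, and a blind trial-and-error loop finds that value simply by nudging w whenever the output comes out too low or too high:

```python
# Caricature of trial-and-error weight tuning: a single recurrent unit
# x[t+1] = w * x[t] + v[t] acts as a perfect integrator only when the
# feedback weight w equals 1. We nudge w up when the output decays
# (too low) and down when it blows up (too high).

def run_unit(w, velocity_pulse, steps):
    x = 0.0
    for t in range(steps):
        v = velocity_pulse if t == 0 else 0.0
        x = w * x + v            # recurrent feedback + input
    return x                     # final "position" after the pulse

target = 1.0                     # a perfect integrator holds the pulse
w = 0.5                          # deliberately wrong starting weight
for trial in range(1000):
    out = run_unit(w, velocity_pulse=1.0, steps=50)
    error = target - out
    if abs(error) < 1e-6:
        break
    w += 0.01 * error            # nudge the weight toward what "works"

print(round(w, 3))               # -> 1.0
```

This is exactly the epistemological point at issue: the loop converges on whatever weight "gets the job done", without ever explaining why that configuration is the right one.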

Let us now look at the premises.
The inputs are supposed to be random, distributed, then finally integrated. What does random mean in this context?
I have already treated, very briefly (see the entry Statistics and Probabilities in The Brain: some problematic concepts), the relationship between statistics as a discipline and statistics as a property of neural processes. Let us see if we can make those general remarks a little bit more specific.

Randomness and statistics in the brain 
One fundamental belief of modern science is the lawfulness of natural phenomena, at least above the subatomic level. Without this physical determinism, science would be impossible. When a physicist talks of the randomness of, say, the movements of the water molecules in your kettle, he is in fact saying two things:
1) the individual movements cannot be predicted one by one, but only as a collective, a system;
2) it does not matter that we cannot predict these individual movements.

What he is certainly not saying is:
3) the movements of the water molecules are themselves random.
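The contrast between points 1) and 2) on the one hand and point 3) on the other can be made concrete with a fully deterministic toy system. The logistic map below is a standard textbook example (nothing specific to water molecules, and all numbers are my own illustrative choices): it uses no randomness anywhere, yet its individual trajectories are unpredictable in practice while the collective behaves dependably.

```python
# A fully deterministic rule whose individual trajectories look random,
# while the collective behaves predictably: the logistic map x -> 4x(1-x).
# Every step is determined; no randomness is used anywhere.

def step(x):
    return 4.0 * x * (1.0 - x)

# Many "molecules", each a deterministic trajectory from its own seed.
molecules = [k / 10001.0 for k in range(1, 10001)]
for _ in range(100):               # iterate the deterministic rule
    molecules = [step(x) for x in molecules]

mean = sum(molecules) / len(molecules)
spread = max(molecules) - min(molecules)

# Individuals end up scattered over almost the whole interval [0, 1],
# yet the ensemble mean sits reliably near 0.5.
print(round(mean, 2), spread > 0.9)
```

The individual values are hopeless to predict without redoing the whole computation, but the statistics of the ensemble are stable and dependable; the "randomness" is in our bookkeeping, not in the rule.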

Randomness is, according to physicists (Bohr and Heisenberg excepted), a property of our knowledge, not of Nature.
When Robinson declares neural inputs as random, he is stepping beyond the bounds of plausibility. Unless he simply means that, as a researcher, he is compelled to consider them as such. In which case, his analysis of real neural processes is in fact based on a model of neural reality which, as justifiable as it may be, cannot and should not be confused with reality itself.

In summary, any way you look at it, Robinson is comparing two models he has himself created. No wonder they seem to fit so perfectly!

The Brain: some problematic concepts
Headless chicken
No, I don't mean Mike, the cock who survived decapitation for 18 months! (described on many sites on the web). The "involuntary" (?) running movements which have been witnessed so often remind me very much of Brown's "narcosis progression". These phenomena have created a few trends in Neuroland, one of them being spinal memory, and even spinal learning. Patients suffering from all forms of spinal paralysis are given hope that one day they will walk again, thanks to these wonderful properties of the spinal cord. I can only hope for them that the researchers know what they are talking about!
[see for instance Edgerton and Roy "Activity-Dependent Plasticity of Spinal Locomotion: Implications for Sensory Processing", 2009.]

I have no way of confirming or rejecting the different claims that have been made, concerning the progress in locomotion shown by rats or other lab animals, and I would certainly not want to take people's hope of a cure away. So, I will refrain from any comment on this point and concentrate on the likeness between running headless and narcosis progression.
Both kinds of reactions look, to me, typically like a flight reflex. They are also apparently the result of commands given before or during the decapitation and/or the onset of narcosis. Both groups of commands survive the (fatal) human intervention. Which raises indeed the question of some kind of memory, the minimal form being the injured state of neurons in one case, their overexcited state in the other.
Another phenomenon that resembles both these peculiar reactions, is that of the so-called phantom pains.
All three have in common their persistence (long) after the brain should have ceased to stimulate organs it could not reach, for whatever reason, anymore.
I do not consider the following arguments as being particularly strong, but it seems to me that any explanation of one of these phenomena should at least shed some light on the other two. 
That is certainly not the case for a "Central Pattern Generator". Even if deemed plausible, it would not explain the case of phantom pains or of a headless chicken without some artificial adaptation of the concept.
- The headless chicken for instance is still in possession of its limbs which can be assumed to relay stimuli, if not sensations, to its muscles.
- Phantom pains do not seem to subside with time, and there are no movements involved.
- Furthermore, this type of pain demands a functional brain, so it would seem in fact to be quite the opposite of the other two which show functional limbs with no brain.
- Also, there is the matter of the sensations involved in one case, and the putative lack thereof in the other cases.
I say "putative", because we have no way of knowing where the sensations reside, or even whether we should think of them as a localized or distributed phenomenon! Our view of the brain as the central locus of sensation may very well appear as strange to the people of the next millennium, as the ancient conception of the heart as that of emotions, or liver as that of bravery, seems to us now.

Ramachandran's mirror boxes ("The Tell-Tale Brain", ch. 1), which helped his patients deal with phantom pains, are an indication that the phenomenon of phantom pain also has a mental component. But then, we could say that of every ailment where a placebo effect would seem to help.

Let it be clear that I am not interested in the movements themselves, and certainly not in their "rhythmicity". The fact that they exist at all is what I find most interesting. I have no difficulty considering the rhythmic movements elicited by the researcher's stimuli as mere reflexes, and find them therefore really insignificant.
I also do not find it strange that the 'sudden division' of the spinal cord elicits such reactions in the animal. However sudden, such an intervention cannot happen without some kind of reaction.
In all cases, it is the (relative) persistence of the movements or the sensations that constitutes a puzzle. These phenomena all tell us something about sensation and memory. I am just not sure what it is they are telling us!
Also, do not forget that even in the deepest of narcosis, living creatures still keep breathing!

The Brain: some problematic concepts
Central pattern generator (CPG)
Sherrington ("The Integrative Action of the Nervous System", 1906) vs. Brown ("On the nature of the fundamental activity of the nervous centres; together with an analysis of the conditioning of rhythmic activity in progression, and a theory of the evolution of function in the nervous system", 1914).

The theme of CPGs has produced hundreds of articles concerning the existence of rhythmic movement patterns in different species, and, as a sign of the times, many more articles demonstrating neural or robotic models of different types of biologically inspired movement patterns, from locomotion to swimming and flying.
I will restrict myself to the instigators of all this commotion since I am convinced that, notwithstanding the great progress in technical and biological details, the problem is still formulated in the same terms as it was a century ago.

Let me first start with the remark that T.Graham Brown was a close collaborator of Sherrington, and that we are witnessing a friendly contest where the mentor is certainly not envious or spiteful of the progress and intellectual independence of his protégé. An admirable attitude that does honor to a great scientist.
Second, the controversy is built on a consensus. Both men believe in Cajal's Neuron Theory, as opposed to the Reticular Theory advocated by Golgi, and they also share a common trust in Darwin's theory of evolution. They also both see their own results reinforced by their interpretation of evolution. A very familiar pattern that apparently settled very early in the minds of intellectuals: first find an explanation to your liking, then show, or just affirm, that Evolution would have done the same thing!

The object of the contest would seem quite frivolous to a non-specialist, the question being whether the rhythmic locomotion movements observed in intact and anesthetized animals are the result of a chain of reflexes (Sherrington), or of a central phenomenon, independent of any sensory stimulation (Brown): what later came to be known as a Central Pattern Generator or CPG.

Here is how MacKay-Lyons expresses it: "In the mid-1980s, a critical paradigm shift occurred in the field of motor control—a shift away from the belief that reflexes were the bases for motor behavior and toward the belief of the motor program as the fundamental substrate underlying motor behavior." (in "Central Pattern Generation of Locomotion: A Review of the Evidence", 2002). Her review of the concept is quite representative of the current ideas concerning CPGs. She is very clear about the significance of those years of research:
"Today, the existence of networks of nerve cells producing specific, rhythmic movements, without conscious effort and without the aid of peripheral afferent feedback, is indisputable for a large number of vertebrates". (my emphasis)
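For what it is worth, the modern CPG claim is easy to reproduce in silico. The sketch below is a Matsuoka-style half-center oscillator, in the spirit of Brown's half-center hypothesis: two units with mutual inhibition and slow self-fatigue, fed a constant (non-rhythmic) drive, settle into alternating bursts. Parameters and names are my own illustrative choices, not a model of any real spinal circuit:

```python
# Toy half-center oscillator (Matsuoka-style): two units inhibit each
# other and fatigue over time, so a constant drive u produces an
# alternating rhythm with no sensory input and no rhythmic command.
# All parameters are illustrative choices, not physiological values.

def simulate(steps=20000, dt=0.01):
    a, b = 2.5, 2.5          # mutual inhibition, self-adaptation strength
    tau_x, tau_v = 1.0, 2.0  # fast membrane vs slow fatigue time constants
    u = 1.0                  # constant, non-rhythmic drive
    x1, x2, v1, v2 = 0.1, 0.0, 0.0, 0.0   # slight asymmetry to start
    outputs = []
    for _ in range(steps):
        y1, y2 = max(x1, 0.0), max(x2, 0.0)   # rectified firing rates
        dx1 = (-x1 - b * v1 - a * y2 + u) / tau_x
        dx2 = (-x2 - b * v2 - a * y1 + u) / tau_x
        dv1 = (-v1 + y1) / tau_v              # fatigue tracks activity
        dv2 = (-v2 + y2) / tau_v
        x1 += dx1 * dt
        x2 += dx2 * dt
        v1 += dv1 * dt
        v2 += dv2 * dt
        outputs.append(y1 - y2)               # flexor-minus-extensor signal
    return outputs

out = simulate()
# Count alternations: sign changes of the flexor-extensor difference.
switches = sum(1 for p, q in zip(out, out[1:]) if p * q < 0)
print(switches >= 4)         # rhythmic alternation from constant input
```

That such a toy oscillates is, of course, precisely what is not in dispute; the question raised in this entry is what the in vivo preparations actually prove about real spinal circuitry.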

How did it all start?

Sherrington was not only a fervent believer in the Neuron Doctrine, he was also a thinker who did not fear to go beyond the immediate results of his experiments. He had a definite view of the brain as a whole, a view which seemed vindicated by Pavlov and his research on reflexes a few years later. Sherrington was already moving beyond the idea of a brain made of discrete units, a conception that was far from being generally accepted by the scientific community at that time, and looking for the next best thing he had found in his work: the reflex as the fundamental building unit of the brain.
I will not get into the details of his analysis and will introduce Brown's objections right away. [For those looking for a nice and clear summary of (1906), see Burke's "Sir Charles Sherrington's The integrative action of the nervous system: a centenary appreciation", 2007. Burke is, by the way, also a believer in CPGs, which is a tad better than believing in UFOs, so his appreciation of Sherrington's (1906) is not completely void of ambiguity.]

Brown was the first to take seriously what he called "narcosis progression", the involuntary movements that anesthetized animals showed during narcosis. Instead of considering this phenomenon as a mere curiosity, he set out to investigate it methodically. The results he obtained were in flagrant opposition to some fundamental assumptions that he had shared with his mentor, Sherrington.

Here are the main points, freed from any superfluous technical detail:
1) The effects of narcosis on the involuntary, and more importantly, rhythmic, movements of the lab animals were apparently dependent on how intense or deep the narcosis was.
["In this phenomenon walking, running, or galloping movements may occur in all four limbs."]
Very deep narcosis tended to neutralize the movements, while light narcosis did not hamper them.
2) Those involuntary movements persisted even in the absence of any other reflexes.
3) The "sudden division" or cutting of the spinal cord in one sharp movement also produced those progression movements, even when all other reflexes had already been abolished.
4) Decerebration and deafferentation (the destruction of the anatomical parts responsible for relaying extero- and proprioceptive stimuli) had no apparent effect on narcosis progression.
5) Stimulation of both limbs after decerebration and deafferentation produced (almost) the same rhythmic movements as in the intact animal.

This last point plays, even nowadays, a central role in the description of a fundamental property attributed to CPGs. Some quotes:
- "In vertebrates, the generation of rhythmic activity in hindlimb muscles, locomotor activity, does not require sensory input but is generated by central pattern generator networks (CPGs)." (Grillner "Biological Pattern Generation: The Cellular and Computational Logic of Networks in Motion", 2006);
- "The most convincing evidence that neural networks in the spinal cord are able to produce rhythmic output was obtained by experiments in which such output is generated although movement related afferent input is completely eliminated through blocking of the movement." (Duysens and Van de Crommert "Neural control of locomotion; Part 1: The central pattern generator from cats to humans", 1998);
- "Central pattern generators are neuronal circuits that when activated can produce rhythmic motor patterns such as walking, breathing, flying, and swimming in the absence of sensory or descending inputs that carry specific timing information." (Marder and Bucher, 2001).

We can see that Brown's interpretation of "narcosis" progression is still maintained in the modern version of this kind of involuntary rhythmic movements. The need to prove that neither descending stimuli nor ascending input played a role in the production of these movements is gruesomely symbolized not only by the decerebration and the deafferentation procedures, but also by the amputation of the paw of the animal. Apparently, such radical surgery could then leave no room for arguments: there was no sensory stimulation possible, since there was no paw to stimulate!

The qualification that Marder and Bucher add in their quote, "timing information", is also quite indicative of the assumptions behind the concept of CPG's. Sherrington's view of rhythmic patterns as the result of a chain of reflexes is thrown aside, because everybody is convinced that sensory (and afferent) input has been eliminated.
In other words, the amputation of an organ means the end of its neuronal influence! 
The fact that it is still possible to stimulate the concerned muscles via the surviving neurons is apparently irrelevant. This rejection indicates a very specific view of a neurone as the carrier of neural codes like timing or sensory information. The idea that these neurons are nothing else but electrical conduits that make reflexes possible, the view shared by Sherrington and Pavlov, is considered as simplistic and obsolete.
The reader will understand that I consider this "progress" as in fact a regretful waste of time and energy. The student would have done better to listen to his master!

The Brain: some problematic concepts
The Metaphysics of Sensation 
[The logical considerations presented here, in the context of retinal change, are treated in an informal, intuitive way; for a more formal treatment, any logic textbook would do.
I would like to mention this particular quote from the logician who first published the idea (discovered 30 years earlier by Peirce) that a reduction of the logical connectives (Not, And, Or) to only one (complex) connective, Nand (or, as would later appear, Nor), was a real possibility:
"Since not only in special deductive systems but even in the foundations of logic not all propositions can be proved and not all non-propositional entities can be defined, some logical constants must be primitive, that is, either unproved or undefined." (Sheffer "A set of five independent postulates for Boolean Algebras, with application to logical constants", 1913). Sheffer goes on with the mention of Whitehead "Principia Mathematica" where a list of such primitives can be found. ("Primitive Ideas and Propositions" in part I, section A, p.95 in the 1910 edition.)
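The reduction Sheffer (and Peirce before him) had in mind is easy to make concrete. A minimal sketch, in ordinary Python, of how Not, And and Or can all be rebuilt from Nand alone:

```python
# Sheffer's result in miniature: the single connective NAND suffices
# to reconstruct NOT, AND and OR.

def nand(a: bool, b: bool) -> bool:
    return not (a and b)

def not_(a: bool) -> bool:
    # NOT a == a NAND a
    return nand(a, a)

def and_(a: bool, b: bool) -> bool:
    # a AND b == NOT (a NAND b)
    return nand(nand(a, b), nand(a, b))

def or_(a: bool, b: bool) -> bool:
    # a OR b == (NOT a) NAND (NOT b)
    return nand(nand(a, a), nand(b, b))

# Verify against the ordinary connectives over the whole truth table.
for a in (False, True):
    for b in (False, True):
        assert not_(a) == (not a)
        assert and_(a, b) == (a and b)
        assert or_(a, b) == (a or b)
```

The same construction works with Nor instead of Nand, which is Sheffer's "as would later appear" alternative.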
I would also like to note that there is no physical device for the implication sign, nor for the language symbols of conditionality "IF.. Then".
A proposition like
"If A > B Then C" necessarily becomes the equivalent of a logic gate with two input lines A and B, and an output line C with the following characteristics:
A (with a certain value); B (with a certain value); C (with a certain value).
The conditional character is a property brought in by the programmer and translated in branching instructions. C with a certain value will activate one instruction set, while another value will activate another set.
Its physical equivalent is that of a switch.
"If A>B Then C" becomes then two distinct sets of instructions (or actions) which have to be defined before the branching instruction becomes active.
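The translation just described can be made concrete in a few lines; a sketch (the two instruction sets are of course purely illustrative placeholders):

```python
# Sketch: "If A > B Then C" realized as (1) a comparison producing a
# definite output value C, and (2) a branch selecting one of two
# predefined instruction sets. The conditionality lives in the
# branching supplied by the programmer, not in any single gate.

def instruction_set_1() -> str:
    return "actions taken when the condition holds"

def instruction_set_2() -> str:
    return "actions taken when it does not"

def conditional(a: int, b: int) -> str:
    c = a > b              # the "gate": two inputs, one definite output
    if c:                  # the branching instruction
        return instruction_set_1()
    return instruction_set_2()

print(conditional(5, 3))   # the first instruction set runs
print(conditional(2, 3))   # the second instruction set runs
```

Note that at every moment the variable c holds a single, definite value; the "conditional" is nothing over and above the two predefined instruction sets plus the switch between them.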
There is no one-to-one-correspondence between the logical representation of a proposition, and its physical translation in logic gates. This peculiar relation between the logical and the physical, even if obvious, is usually neglected. The whole argumentation of The Language of Thought Hypothesis is, I would say, built on the confusion between those two dimensions.
Also, while we could compare the "logical state" of an And-gate to the situation of Schrödinger's cat, that is, having both states simultaneously, the physical gate will always contain a single, definite state.
Both these facts should at least teach us caution. It is not enough to have a logical representation of a certain process; one must make certain that it is also realizable with physical means. Especially in the case of neural processes, where every assumption about neurons creates in fact a new possible brain which need not resemble the real one in any way. A "logically possible" brain is like the philosopher's "metaphysically possible" brain. It is a fiction that can prove to be very fertile, or an obstacle to progress.
I would like to close these preliminary remarks on the following note: what logicians call "logical constants" look suspiciously like what I call "sensations"!]

Thinking about the problem of how Change is detected in the retina, I realized that, once again, my desire to find clear-cut solutions in the physical sphere might have made me guilty of wishful thinking. But fundamental problems do not allow you to play the three monkeys forever. So here is the issue in a nutshell.

How does the brain decide that something has changed?
Do we need an on-off gate to explain logical or chemical decisions?
How can the brain decide whether the next input is different or the same without neural codes?

The only clear answer I could finally come up with was:

It feels different!

Even die-hard materialists have no compunction admitting that physical stimuli give rise to non-physical sensations. The problem is the so-called causal efficacy of these sensations. The idea that there could be non-physical causes, that our sensations could somehow have an effect in the world, is metaphysically suspect. Very few thinkers outside of spiritualist or theological circles would actually advocate such a step. Materialism as understood by the Powers that Be simply forbids it.

Nonetheless, the question remains.
Can we think of a (neural) algorithm that could explain change?
It seems very simple at first.
1) input A has effect a, which changes the receptor, making it insensitive to input A.
2) Input B has effect b, which changes the receptor, making it insensitive to input B.

But it gets very complicated really fast.

3) Is it the same receptor, or two different ones? If it is the same one, how is it made insensitive? Is it a chemical process? If it is, is it reversible? I mean, it has to be, doesn't it?
4) Two or more receptors: how can the input choose between them?

How do computers solve this conundrum? By taking into consideration every possible type of input. Furthermore, there are rules for setting bits on and off, and each bit has two possible reactions as a consequence.
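A minimal sketch of this computer solution, assuming nothing more than a stored previous state and a comparison rule:

```python
# Sketch of how a computer "detects change": it stores the previous
# input and compares it (bitwise XOR) with the current one. Nothing
# decides "same or different" in one stroke; the decision is two
# stored states plus a predefined comparison rule.

class ChangeDetector:
    def __init__(self) -> None:
        self.previous = None

    def feed(self, current: int) -> bool:
        # XOR is zero exactly when the two bit patterns are identical.
        changed = self.previous is not None and (current ^ self.previous) != 0
        self.previous = current
        return changed

d = ChangeDetector()
print(d.feed(0b1010))  # False: nothing to compare against yet
print(d.feed(0b1010))  # False: same input as before
print(d.feed(0b1011))  # True: one bit flipped
```

The point to notice is that the detector only works because every possible input has already been defined as a bit pattern of a known format.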
Could then the same rules be applicable to the brain? Let us see.
You need arithmetic and logic rules. There is no reason why it should not be possible to implement them neurally. Except that we would not know when to apply them for sure. That might be solvable in the long run. But more importantly...
Are neurons binary transistors, or should we consider them as containing much more information? If the latter, how many bits will each neuron represent?
But then, a neuron will not only need to store information, it must also be able to react to it. Which means that every neuron will in fact be an independent processor capable of interpreting the data it receives, and deciding which data to send. After all, if there is something we have learned about the brain, it is that there is no Central Processing Unit. Such a "non-Cartesian pineal gland" would certainly not remain unseen!
These mini-processors will in turn have to be in possession of the same arithmetic and logical rules we mentioned earlier.
To avoid an infinite regress, we will have to call the help of a Divine Power or of Evolution. Which, logically, as far as the structure of argumentation is concerned, amounts to the same. If we knew how Evolution did it, we would not need to ask for its help, we would just imitate it.

We can of course decide that neurons do not need to be that intelligent, and that intelligence is in fact an emergent property of many neurons. I find this conception even more puzzling than, with all due respect, the Holy Trinity; and, in view of my cultural background, I do not say that lightly.

Whatever mystical flavor one chooses, one thing is certain: there is no simple implementation of such an elementary concept as change (or identity for that matter).
There remains what I said earlier about all possible types of input. After all, is that not what the brain is already doing with all its sensory and proprioceptive organs? Reacting only to input that has previously been defined?
Absolutely. In fact, we only need to prove one simple thing to end this metaphysical discussion for all time. (See also the entry Interneurons as Nano Computers in my thread Neurons, Action Potential and the Brain.)
We only need to prove that neurons react differently to different stimuli. That is, as a matter of fact, the only way to build a logic gate [in fact a series of logic gates]. It must be possible for a neuron to react with either A or B. Otherwise, we would only be deferring the decision to another neuron further down the brain.
The empirical proof should not be very difficult. Take a random diverging neuron, stimulate it with different impulse patterns, and see if it ever chooses a different target instead of all of them.
Wait, what about inhibition? What about it? When is it supposed to come into action? How could inhibitory processes decide which neuron to stop and which to let through? Still, it does happen, doesn't it?
Yes, but we are talking about the fact that a single neuron can contain in itself the codes for different actions. Inhibition does not necessarily need that. It could apply to a neuron which would stimulate different neurons indiscriminately. Inhibition would be something like a dam, stopping or redirecting the water flow. The neurons we are talking about, if they are to count as logic gates, should be able to open or close the dam themselves. Otherwise, we would have only, once again, postponed the decision to branch.

Is it the Ultimate Test? I am not a prophet, so I would not know. It does indicate the possibility of a distinction between systems, be they biological, carbon- or silicon-based, and what I can only call other kinds of (physical or non-physical) systems.

As you see, we are, again and again, thrown back against the metaphysical wall we desperately seek to avoid. We need a system that can explain how decisions about Identity, Change, and other fundamental concepts, are possible when it is obviously impossible to implement such concepts in a physical substrate.

For me it means, I'm afraid, that I had better not hold my breath waiting for an empirical proof of how saccades detect change.

I think Science should embrace this challenge and look for ways to wrestle it from the hands of all kinds of mystics, especially those who claim to hold its principles in high regard.

The Brain: some problematic concepts
"What does it feel like not to feel?"
I would like to thank Jana Helms for her candid remarks, which do us all honor by the insight they give into her very private sensations as a spinal patient. Here are some quotes that I find enlightening, in that they correct a much too sterile and clinical view of sensations as described in scientific articles. Very often, we get the impression that it is a question of all-or-nothing: either you have a working spinal cord, and you have normal sensations; or your spinal cord is damaged, and in that case you have none. Jana Helms brings a much needed precision to this simplistic dichotomy, which also brings the theme of sensation, and the experience of a spinal patient, closer to home.

“There’s burning in my legs and lots of other sensations, but none that I could really describe right now.”
"Some days are more annoying than others, and it feels like an itch you can’t scratch. Think of the first few seconds when you move your arm or leg after it’s fallen asleep - it’s that feeling, magnified times infinity."
"If you touched my stomach or kicked my leg I can tell by pressure, not skin-to-skin touch and mentally, it can be disturbing."
"I can watch a needle go into my leg and not feel a thing, yet if you grabbed my leg and squeezed, the change of blood flow would let me know which leg you touched."

The Brain: some problematic concepts
The significance of Weber's Law

[Weber, "Der Tastsinn und das Gemeingefühl", 1851. Translated as "E.H. Weber on the tactile senses", 1996. The translation is out of print.]

The main point of the "law" is to build a bridge between (private) sensations and (objective) measurements. Weber did a lot of "experiments" (their informal character would be grounds for rejection today) on himself, family members and collaborators. He was particularly interested in the smallest distance between two stimulated points (of the skin) that could still be felt as two different sensations. In modern terms, he was trying to map the receptor density of an organ based on the sensations felt by the people he experimented on.
These measurements could be of course expressed in numbers, and as such they could become the objects themselves of further mathematical investigation. One could for instance find out that certain groups of numbers had a specific relationship, like a logarithmic scale, or a differential equation.
One could, on the basis of these formulas predict the reaction of people to, for instance, a certain increase of weight. The researcher could, this way, be sure whether the increase would be felt or not.
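For concreteness, here is a minimal sketch of such a prediction. It assumes the textbook form of Weber's law, in which the just noticeable difference grows in proportion to the baseline stimulus; the Weber fraction k = 0.05 is purely illustrative, not one of Weber's measured values:

```python
# Weber's law as a predictive rule: an increase is felt only when it
# exceeds the just noticeable difference (JND), delta = k * baseline.
# The fraction k = 0.05 is an illustrative placeholder.

def is_increase_felt(baseline: float, increase: float, k: float = 0.05) -> bool:
    jnd = k * baseline          # the JND scales with the baseline stimulus
    return increase >= jnd

print(is_increase_felt(100.0, 4.0))   # below the JND of 5.0: not felt
print(is_increase_felt(100.0, 6.0))   # above the JND: felt
print(is_increase_felt(1000.0, 6.0))  # same 6 units on a heavier weight: not felt
```

The last line shows the essential point: the very same physical increase is felt or not felt depending on the baseline it is added to.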
The exact nature of the formulas and the mathematical means employed to get them is utterly irrelevant except to mathematicians. Even psycho-physicists need only to know when and how to use them.
There is therefore a direct link possible between sensations and objective data. These data do not tell us anything about the locus of the sensations involved, nor about their nature. Researchers just use average declarations or reactions by humans and animals to refine the quantitative properties of this link.
Such a link is a means of investigation and prediction for the researcher.
The question is therefore: can we attribute those quantitative properties, and their mathematical relationships, to the sensations themselves? In fact, we could ask physicists the same question regarding the natural objects and phenomena they study.
Does a falling object have to measure gravity first to react accordingly? After all, how else would it know when to accelerate and how much to accelerate?
It is indicative of our current view of physical processes, that such a remark would never be taken seriously even by laymen with no understanding of the laws of physics whatsoever.
This same question would have been taken very seriously in ancient times.
In the same way, we take very seriously the question of how sensory neurons can react differently to different stimuli, and how that may be caused by computations made by those same neurons.
In other words, if we treated falling objects the way we treat neurons, we would be expecting those objects to be able to compute gravity at any moment before reacting accordingly!

The Brain: some problematic concepts
Learning from your mistakes: Neural networks and their significance

Let us look at a concept of neuro-modeling, that of feedback.
There are enough situations conceivable in which the information we need to be able to correct an error can be very complex. In fact, most real-life situations are much too complex for any neural network model. So, we have to make them more tractable to analysis.

The pattern is easy enough:

Action -evaluation- Action

The last action being an adapted version of the previous one, or the final output.
Neural models, of necessity, must make abstraction of the complexity of real life situations, and present the evaluation processes as a single, mathematical process: the adjusting of weights until the right output has been found. 
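That single mathematical process can be written down in a few lines. A minimal sketch, using the classic perceptron rule as a stand-in for whatever adjustment scheme a given model actually uses:

```python
# "Adjusting weights until the right output has been found": the
# perceptron rule on a single artificial neuron. The evaluation step
# (error = target - output) is exactly the single mathematical process
# the text describes; everything the "teacher" knows is hidden in the
# targets supplied with the training data.

def train_perceptron(samples, epochs=20, lr=0.1):
    w0, w1, bias = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x0, x1), target in samples:
            output = 1 if w0 * x0 + w1 * x1 + bias > 0 else 0
            error = target - output          # the "evaluation"
            w0 += lr * error * x0            # the weight adjustment
            w1 += lr * error * x1
            bias += lr * error
    return w0, w1, bias

# Learning logical AND, a linearly separable toy problem.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w0, w1, b = train_perceptron(data)
for (x0, x1), target in data:
    assert (1 if w0 * x0 + w1 * x1 + b > 0 else 0) == target
```

Everything that in real life would be a complex evaluation of an action has here been collapsed into one subtraction.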
Unsupervised learning is often seen as the summit of artificial intelligence, a goal to attain for each neural network model. What we must realize is that unsupervised learning is nothing more than the automation of the human teacher's behavior. It does not eliminate the homunculus [sorry George, I really have to say this], it just hides it behind a mathematical or computer program.
We could of course always object that the same goes for the brain, since it too is dependent on all kinds of biological and chemical processes. But that would be patently false.
A homunculus is present when a local problem can only be solved by a whole brain or (living) organism. In our case, the adjustment of weights cannot be done automatically without assuming functions that are not and could not be present in the network, but only in a complete system.
There is no algorithm possible to adjust weights without the help of a homunculus, unless we believe in spontaneous generation, or its modern equivalent, Emergence. It would be like trying to solve the halting problem with a computer program.

Does that constitute a decisive distinction between computers and living beings? I am afraid it does not. We should take Dennett's arguments concerning a robot that would not only survive, but evolve in time, very seriously. At least as a metaphysical possibility. Practically, we have no idea, so far, how to build such a machine. Still, we cannot deny its plausibility on practical grounds only, nor should we try.
Let me just say that, were such a robot to come into being, it would then become its own homunculus, just like George is an indissociable part of us [yeah yeah, I love you too].

Seeing as we are still so very far from such a Marvin [the technological marvel in Adams' "The Hitchhiker's Guide to the Galaxy"], the homunculus problem for neural networks remains acute. Neural networks, whether controlled directly by a "teacher", or unsupervised, can never even come close to neural processes in a biological brain. The necessity to translate each decision into "weight adjustment" is too limiting, even if it could be justified on general grounds: we can always speak of "more red", "more pain", "more love", etc.

Furthermore, as shown elsewhere (see above the entry Neural Integrators), the way we analyze a situation determines the nature of the network. In fact, we are saying the same thing twice, once verbally, the second time with so-called artificial neurons (in fact, computer programs). But we already knew that many things we do, or maybe even feel, can be expressed in computer programs. In other words, neural networks, just like their classic brothers in AI-land, only prove how smart we are. To think that they somehow could depict what is really happening in the brain is mere prejudice. At least, until we have built an independent (or as independent as humans can be) learning and behaving machine. And even then, we could not be sure that the brain does not do it differently: the so-called multiple realizability principle.

Neural networks therefore are built on local principles and mechanisms, while brains or intelligent robots are built on the coordination of many "localities" which support each other.

That is still not a reason to attribute to local networks a temporary visa while waiting for the final construction of all necessary parts or modules. We have no way of knowing, when building a local solution with George's help, whether it is what it would look like in the final system. 

We can therefore never draw any conclusion from such a network to the neural processes that take place in the brain.


The Brain: some problematic concepts
Weber and Biological Clocks

Imagine that you somehow were convinced that the brain had an internal measuring tape that made it possible for an animal to take decisions based on the (relative) size of, or distance between, objects.

[I noticed that city gulls, and city birds in general, react quite differently to a human approach. Some would fly away as soon as you came closer than 3 meters, others were more daring and waited until the last minute. I am sure that they all, somehow, "used a distance mechanism" that directed their reactions. Don't you think so?]

Weber's Law would certainly be a very good place to start. After all, it already gives you the possibility of correlating sensation with size, the same way it correlates weights with sensations of "heavier" or "lighter". Sensations of size, as shown by Weber (1851), can be given very accurately by the skin (the tongue in particular is very sensitive to minimal differences of size or distance between two tactile sensations), the eyes being the finest instrument of all. Now all you would need to prove the existence of such an internal measuring tape would be quantitative data that somehow correlate decisions with size or distance. Being a smart researcher, you soon come up with all kinds of experiments that show just that. Every time a lab animal makes a decision, you are able to show certain mathematical patterns that prove beyond a doubt that the animal had to use some kind of measuring tape for you to discover such correlations. They are just too regular to be considered random! Mind you, you were not lucky enough to stumble on such a neural measuring tape, but that is only a matter of time. After all, like I just said, the mathematical proofs are overwhelming. It is something like the prediction, on purely theoretical and mathematical grounds, of the Higgs particle! Less dramatically, many chemical elements had their existence predicted long before they were discovered, also purely on theoretical grounds having to do with the Table of Elements.
We should therefore take those predictions very seriously. If mathematical analysis predicts the existence of a measuring tape, then that is certainly smoke pointing to some kind of fire! The question of course is whether what the calculations are pointing at must necessarily take the form of a (neural) measuring tape.
As far as I know, there is no theory of an internal measuring tape in the brain, even if there are many theories concerning space perception.
There are, though, certainly quite a few theories about internal time clocks in the brain. I will not consider any one of them in particular, since the principle they rely on is fundamentally the same: "the old idea that some aspects of timing in humans [and animals] depend on an “internal clock”". (Wearden, "Applying the scalar timing model to human time psychology: Progress and challenges", chapter 2 of "Time and mind II: information processing perspectives", Helfrich et al. (eds), 2003). [See also Hopson, "General Learning Models: Timing without a Clock", in "Functional and Neural Mechanisms of Interval Timing", Meck (ed.), 2003. Hopson had a very promising start, sounding quite critical of the whole idea: "This metaphor is quite natural, since a stopwatch is a mechanical solution to the same problem as our biological sense of time." But then it appeared that all he wanted to do was to supersede the different approaches to neural timing and replace them with an all-encompassing neural network!]
I will not tire you with the mathematical arguments that convinced so many researchers that the problem was not the existence of biological clocks, with their pacemakers, oscillators, accumulators and what not, but only one of the exact mathematical description of their functions. 
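For readers unfamiliar with this literature, the pacemaker-accumulator scheme those theories share can be sketched in a few lines. Every value below is illustrative, a toy version of the device the theorists posit, not a claim about the brain:

```python
import random

# Sketch of the pacemaker-accumulator clock: a pacemaker emits ticks
# at some (noisy) rate, an accumulator counts them, and elapsed time
# is read back as count / rate. All parameters are illustrative.

def estimate_duration(true_seconds: float, rate_hz: float = 50.0,
                      rng=None) -> float:
    rng = rng or random.Random(0)
    ticks, t = 0, 0.0
    while True:
        # Pacemaker: tick intervals jitter around 1/rate.
        t += rng.gauss(1.0 / rate_hz, 0.1 / rate_hz)
        if t > true_seconds:
            break
        ticks += 1                 # the accumulator counts the tick
    return ticks / rate_hz         # the "read-out"

est = estimate_duration(2.0)
print(abs(est - 2.0) < 0.2)        # the estimate tracks the real interval
```

Notice that the sketch works only because the read-out step already "knows" the pacemaker rate; that is precisely the kind of assumption the mathematical models quietly build in.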
There is, after almost 40 years of research (counting from Gibbon's "Scalar expectancy theory and Weber’s law in animal timing", 1977, which everybody agrees signaled the start of the frenzy), no clue whatsoever pointing to such a neural mechanism, and its discovery is still as improbable as ever.
So, what can we make out of all this?
My personal conviction is that the mathematical proofs do not and cannot show the existence of a specific neural mechanism. I would like to take Hopson's remark very seriously, and point out that what the researchers are expecting to find is most certainly influenced by what they already know, or more importantly, the neural mechanism they hope to find is the kind of device they would build to solve the problem as they see it.
So yes, there are certainly mechanisms in the brain that involve time and timing, but the solution cannot be the one automatically favored by scientists. They must realize once and for all that the way they would solve a problem is in fact a very good indication of how their own brains most certainly would not. It sounds, as always, very paradoxical: our brain can learn mathematics and statistics, and how to use them, but it cannot learn these disciplines with the help of neuro-mathematics or neuro-statistics, because such innate disciplines are, as such, very improbable.
Time, maybe more than anything else, is a fundamental sensation, and like Weber so convincingly showed us, we can learn much about the way our brain uses it with the help of scientific observations. But trying to replace sensation by calculation is a very serious mistake, not only metaphysically, but most certainly, scientifically. After all, these calculations would have been impossible if not for the sensations that gave them birth in the first place.

The Brain: some problematic concepts
Neuro-mathematics and Neuro-statistics
I have, until now, given as many arguments as I could think of to convince you that such innate abilities are improbable. But suppose I am wrong? After all, we are in the middle of a metaphysical debate which by definition has no end, and which already involved Plato and his immutable forms. In a more modern version, Piaget and Chomsky seem to personify the two positions: innate structures (Chomsky), or a maturing process allied to experience (Piaget).
[I do not think that there is a single thinker that defends a pure empirical stance, in which organisms learn "everything" through experience. They all admit of a certain level of genetic baggage. As far as I know, even Locke's tabula rasa was more of a polemical concept than an absolute.]

What then?

As I see it, it would not change much in my analysis. Even assuming the existence of neuro-mathematics and neuro-statistics, we would still have to show how these disciplines, as we know them, could be translated into neural processes.
So my advice would be: do not waste time in showing that the brain is using mathematical or statistical formulas, try to emulate them in neural networks worthy of that name.
By the way, adjusting weights just won't cut it!
You will have to show in detail, without George looking over your shoulder, how real neurons, and not a fictive version of them, can add, subtract, integrate, calculate an average, etc.

I would find it exhilarating if you succeeded, and would gladly acknowledge my defeat!

The Brain: some problematic concepts
The schizophrenia of Neuroscience 
Two opposing paradigms are used interchangeably, while most, if not all, authors seem to have no compunction in using the results of both.
Hebb's paradigm is essentially neutral as far as the origin of stimulation is concerned: electrical stimulation is considered the sole relevant form of neural stimulation between neurons. [It should more accurately be called Sherrington's paradigm, since the Hebbian paradigm is but a more precise version of the former.]
Brown's paradigm: as shown above, for Brown the nature of the stimulation was paramount. Its electrical character was considered secondary to the origin of the stimulation and its putative nature.
It would be interesting to analyze influential authors and the way they deal with this methodological dilemma. The overwhelming majority of in vitro experiments rely on the Hebbian view, while many neuroscientific theories rely heavily on the Brownian paradigm.
An example would be O'Keefe's theory of spatial memory, which uses results from Kandel and the like on the one hand (Hebbian paradigm), and the specific value of neurons, as in the so-called place cells, on the other (Brownian paradigm). [His Nobel lecture (2014) is a very good summary of his work.]

These two approaches are incompatible. Using the results of both should therefore lead to grave theoretical and practical inconsistencies. This is certainly a new field worth exploring.

The Brain: some problematic concepts
Plausible Neural Networks?

Mastebroek and Vos (eds) "Plausible Neural Networks for Biological Modelling", 2001.
The first part of the book is concerned more with the empirical and theoretical analysis of neurons that can be used for the construction of the actual neural networks that are presented in the second part.

"Integrate and Fire Model"
In ch. 2, Gerstner seems to have anticipated my challenge to show how neurons can perform mathematical operations. He even has a very nice diagram in two parts (fig. 2.4, p. 29). One part shows two neurons synapsing with each other. The other part is of course the most important: it translates neuronal functions into formal devices that compute specific parts of a function.
The mathematical functions, (formulas 2.6 to 2.8) are not really important as such. What matters is the fact they can be put into a clear diagram containing a resistance element R, a capacitance C and the necessary voltage, input and output symbols. Because there is a clear link with certain biological properties of neurons (membrane resistance, action potential, etc), the diagram seems to possess a high degree of biological plausibility.
Let us look at it more closely.
We will probably notice right away that all the processes described are of an electrical nature. Which should not be a problem. After all, when describing the electrical properties of a circuit, we do not need to specify the kind of molecules a battery contains. Its voltage and resistance, and other quantitative figures, tell us all we need to know.
But such a diagram has bigger pretensions. It aims not only at explaining electrical phenomena within and between neurons, but also at providing a clear image of the behavior of a neuron when it gets different inputs from multiple neurons: the integrating part, which again deserves its own formulas or equations (2.9 to 2.11).
What could that possibly mean?

One neuron getting multiple inputs and reacting in a certain way.
Let us not forget that a neuronal input produces, directly or indirectly, either a sensory sensation, a memory, a thought or an emotion, or a bodily reaction (visceral or muscular). The same holds for any neuronal output.
The equations that our author overwhelms us with treat all neurons indifferently, as random elements whose value is determined by the equations themselves.
Even if the electrical analysis is right, it still does not tell us anything worth knowing concerning the functions specific neurons are fulfilling when reacting exactly as predicted by the formulas. Which means that we are in fact facing completely useless computations that do not help us in any way to better understand what is happening in the brain.
Furthermore, neither the diagram nor the formulas really relate to biological neurons.

The diagram is that of an electrical circuit that can produce certain electrical results. What would be more interesting is showing how the biological processes in a neuron fulfill all the functions described in the diagram. But all the author has done is translate his verbal description into a more scientific sounding diagram and formulas. Here is his initial description:
"A neuron is surrounded by its cell membrane. Ions may pass through the membrane at pores or specific channels which may be open or closed. A rather simple picture of the electrical properties of a cell is the following.
Close to the inactive rest state the neuron is characterized by some resistance R in parallel with some capacitance C. The factor RC= [some squiggles] defines the membrane time constant of the neuron. the voltage u will be measured with respect to the neuronal resting potential. If the neuron is stimulated by some current I, the voltage u rises according to [formula 2.6]."
As you can see, the behavior and properties of neurons have been brought down to those of elements in an electrical circuit. And that is all the diagram does and can show.
Getting to the integrating ability of neurons, the description changes accordingly:
"In a real cortical network the driving current is the synaptic input which arises as a result of the arrival of spikes from other neurons. Let us suppose that a spike of a presynaptic neuron j which was fired at time [squiggles] evokes some current [squiggles] at the synapse connecting neuron j to neuron i. The factor [squiggles] determines the amplitude of the current pulse and will be called the synaptic efficacy [ah! That one!]. The function [squiggles] describes the time course of the synaptic current. If neuron i receives input from several presynaptic neurons j, the total input current to neuron i is [formula 2.9]."
Again, nothing in this description goes beyond that of an electrical circuit, and the mention of "a real cortical network" is in no way justified.
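For readers who want to see what these equations actually compute, here is a minimal sketch. Formula 2.6 is presumably the standard RC response to a constant current (u(t) = R·I·(1 − e^(−t/τ))), and formula 2.9 reduces to a weighted sum of presynaptic currents; the parameter values below are illustrative, not taken from the book:

```python
import math

def voltage(t, I, R=1.0, tau=10.0):
    """Standard RC response to a constant current switched on at t = 0
    (presumably what formula 2.6 expresses): u(t) = R*I*(1 - exp(-t/tau)).
    u starts at the resting potential (0) and saturates at R*I."""
    return R * I * (1.0 - math.exp(-t / tau))

def total_input(weights, currents):
    """Formula 2.9 stripped to its core: the total input current to
    neuron i is the weighted sum of the currents arriving from the
    presynaptic neurons j (the weights are the "synaptic efficacies")."""
    return sum(w * c for w, c in zip(weights, currents))

u = voltage(t=30.0, I=2.0)                                  # ~1.90, close to the saturation value R*I = 2
I_total = total_input([0.5, 1.0, -0.3], [1.0, 1.0, 1.0])    # 1.2
```

Note how little this says about neurons as such: any RC circuit obeys the same two functions, which is precisely the point made below.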

As is often the case, a neural process is interpreted in such a way as to make the use of mathematical formulas possible, and then those same formulas are used as the indisputable proofs of the plausibility of the explanation.
Whatever Gerstner did with his voodoo incantations (all the beautiful equations), he certainly did not explain how neurons could perform the mathematical integration of multiple inputs, nor did he give us a clue as to what these electrical currents could possibly mean to a functioning brain.

The Brain: some problematic concepts
Could Aliens understand our computers?
Imagine a race of beings from another galaxy, so different from us as far as their sensory apparatus is concerned that even bats, and what it feels like to be one, would look familiar to us. There is no reason to imagine other natural laws, and so we could even assume that their science, and mathematics, would be comprehensible to us as far as their manipulations of physical processes are concerned.
Let us give these Aliens ["who are you calling Aliens?", they would say] a CPU and see what they can make out of it. To make it even simpler, let us give them a very old 8-bit processor like the one used in the ancient Commodore 64.
Here are some bets I would care to place:

1) Eventually, all mathematical functions will be mapped to spatial areas (registers) and wires.
2) The cache memory will be distinguished from the "active" parts.
3) Input and Output paths will be recognized.
4) They would guess that some kind of input has a clear mathematical origin or function, and that would also hold for the corresponding output.
5) They would have no reason to believe that the alien CPU [fair is fair] does anything else but compute mathematical values. And even if they somehow suspected it, because they have similar devices, they would have no way of understanding the non-mathematical functions of the CPU.
[Translate what happens in a word processing program in geometrical or other mathematical functions and see what you could make out of it!]

You think this is again some useless, hypothetical philosophical exercise? Think again. How do you think our scientists usually approach our own brains?

At least these aliens would somehow discover which mathematical functions the Commodore 64 was capable of.
But what if there had never been any mathematical functions in that CPU? This might of course be difficult to imagine at first. Even a GPU is but a monstrous mathematical machine. But we have all at some time used a very primitive remote control for a slide projector or similar device. The logic used could not be simpler. It was just a matter of connecting the dots to turn the machine on, other dots to turn it off. So, let us make this remote a little bit more complicated, with only one condition attached: no mathematical functions. None of that add or shift nonsense. Just plain commands to turn machine parts on or off, make it move or stop, etc. A simple electrical robot with no intelligence of its own.
We can make this robot as complex as we want, as long as the condition remains unchanged. Every new function is a simple on or off function under the control of a human or another, more intelligent machine. 
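To make the condition concrete, here is a minimal sketch of such a purely non-arithmetic controller; the part and command names are of course invented for illustration:

```python
# A controller with no arithmetic whatsoever: every command just turns a
# named part on or off. No add, no shift, no numerical values at all.

class RelayRobot:
    def __init__(self):
        self.parts = {}  # part name -> True (on) / False (off)

    def execute(self, command, part):
        if command == "ON":
            self.parts[part] = True
        elif command == "OFF":
            self.parts[part] = False
        else:
            raise ValueError("unknown command: " + command)

robot = RelayRobot()
robot.execute("ON", "left wheel motor")
robot.execute("ON", "lamp")
robot.execute("OFF", "left wheel motor")
# robot.parts is now {"left wheel motor": False, "lamp": True}
```

However many parts we add, the system never computes anything; it only records and obeys on/off commands issued from outside.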
In fact, why not build a soulless, brainless body, connect it to a relay chip which would then be remotely (and in an undetectable way) controlled by our brain. And then give the chip to the Aliens [it was our turn, right?].
Once again, this relay chip would be simply the sum of all On and Off switches controlled by a human brain, plus maybe some simple mechanical reflexes.

But wait! I have a better idea! Let us give the chip to our scientists, and tell them that it is something from outer space! Without of course telling them that it is a simple relay between a brain and a body.
Would they use mathematical models to try and understand it? How could they not? That is how they were trained.
Would that help? Or would that unnecessarily complicate matters? 
And what about the Aliens? Would they have a better chance at reverse engineering the chip than our scientists? Or would they be handicapped by their being non-human?

I leave the answer to you.

Mathematical interpretability
I am not saying our brain is a simple relay between a "real" brain, or a soul, and the body. I am just trying to make a point concerning the danger of blind mathematization of brain processes. Researchers too often identify scientificity with the number of mathematical symbols they can introduce in their text. The success of the physical sciences seems to justify the blind belief in the indispensability of mathematical analysis.
Philosophy has a long tradition of resistance against such a limited view of non-physical processes, from the left and right of the political spectrum. This resistance has never really penetrated the wall of false certainty that has been built around neuroscience. Physiologists, neurologists, and even psychiatrists see their disciplines as the closest you can get to the real god, physical science, and still be studying living creatures. Against the ideological attacks of philosophers, neuroscientists have also reacted in an ideological manner: they sought support and recognition in the "exact camp" and have blindly adopted its customs and values. There are of course many rational arguments in favor of such a choice, and I will certainly not attempt to belittle them.
There is one point though that I would like to bring to general attention, and that is the role of mathematics in neuroscience.
Mathematical interpretability is not a truth criterion (nor the lack thereof a criterion of falsity). It only shows that a certain view or analysis can be formulated with the help of mathematical symbols and equations.
Too often researchers stop at that superficial phase: interpretation of neural processes in mathematical terms. They think then that they have proven their point, while, in fact, they have just restated the problem differently.
A critical approach to mathematics is very natural among physicists and other "exact" scientists. They will gladly make use of any mathematical tools mathematicians have to offer, but they would never let mathematicians define for them the object of their study or its nature.
In this respect, neuroscientists are more catholic than the pope. Their blind belief in mathematics dangerously resembles heresy: the replacement of the true god by a false deity.
Physicists will not believe until they have been shown empirical proof. Neuroscientists would do well to follow their example. The complexity of the brain has too long been used as an excuse to believe unconditionally in the truth of mathematical explanations instead of considering them as a simple means of investigation. That is why respected researchers (like the editors of the very expensive book "The Neurology of Eye Movements", 2015) can affirm without blushing that mathematical creations like motion detectors, neural integrators and pacemakers are indisputable facts. 

They should be ashamed of themselves for being so gullible!

The Brain: some problematic concepts
Plausible Networks? (2)

The Neural Integrator Reloaded
In ch. 5, Dray analyzes the "human ocular integrator" that Robinson had tried to sell us.
What is particularly interesting is that his presentation is practically the mirror image of Robinson's. While the latter put the emphasis on the neural network model to prove the plausibility of his, in my eyes unfounded, assumption of the existence of a biological neural integrator, Dray goes exactly the opposite way. Robinson's hypothesis has become an indisputable biological fact that lends credibility to Dray's own neural network!

We could sum up their relationship with a new variety of the doctor's game children like so much:
I'll show yours if you show mine.

Where neuroscientists love to spray mathematical formulas around to show their level of expertise, computer experts do the same with biological terms that must mean as much to their colleagues, and young computer students, as their mathematical equations mean to me: "the paramedian pontine reticular formation (PPRF) [always good to know your acronyms, especially in a party], the retina [yeah, well...] and the semicircular canals [that would be Amsterdam, right?]".

Here is a quote that will sound frighteningly familiar:
"When neurophysiologists studied the oculomotor system they noticed that while all the initial and incoming commands were coded in terms of eye velocity, the eye muscles are mainly position actuators and need to be given a command proportional to the desired eye position: integration is clearly occurring." (p. 92, my emphasis)

Oh! You will love this one: "The neural integrator has something fascinating: it is the first time that the role of a biological neural network, well described anatomically, [uh?] is described in a very sharp way in mathematical terms..." (my emphasis)
Need I remind you that Robinson in fact was counting on his neural network model to support his biological assumptions? 

A fascinating case of Reciprocal Fertilization. Smells like incest to me.

The Brain: some problematic concepts
A Spike Encoder? Seriously?

This is how two M.I.T. scientists, Saxena and Dahleh ("Real-Time Decoding of an Integrate and Fire Encoder", 2014), put it:
"One of the most detailed and widely accepted models of the neuron is the Hodgkin Huxley (HH) model...." They are referring of course to Hodgkin and Huxley, “A quantitative description of membrane current and its application to conduction and excitation in nerve", 1952.
We have already seen this model in action with Gerstner (see the entry "Plausible Networks?" in this thread), so I will not repeat myself except to say that the model is, I assume, mathematically sound, and can therefore be very useful in the study of neural processes as electrical processes. As soon as one tries to take into account other, biological, properties of neurons, the model loses any relevance it might have had.
I can easily imagine the attraction such a model can have for science students who get the welcome conviction that they can understand brain processes without having to learn a new way of thinking. Brains, and neurons, look so much more familiar when seen as electrical or even as complex computer devices.

Goldberg ("Distortion of Neural Signals by Spike Coding", 2007) is not the first to come up with the idea of a spike encoder, but he is certainly promoting it very actively. In "The Vestibular System: A Sixth Sense", of which he is an editor, he considers such a device as one of the necessary steps of the transduction process of vestibular hair cells, the last one to be exact. The spike encoder, "a set of conductances in the afferent terminal, converts the postsynaptic depolarization into a train of action potentials, which is transmitted to the brain" (p. 45).

[I found it quite interesting to note that "spike encoder" does not figure in the index even if it is the title of a paragraph and is prominently present in a diagram!]

The authors [the whole book has been collectively edited] clarify the meaning further: "To transmit information from the vestibular labyrinth to the brain requires that postsynaptic voltages be converted to spike trains."
I could hardly believe my eyes when I read this. Goldberg thinks that the passage from chemical to electrical processes cannot be understood unless we assume the existence of a device that would be responsible for such a conversion!
Not only that, but after giving a brief account of calcium and potassium currents, and how they are related to spikes, he goes on to explain that "A major source of interspike-interval variability arises from the quantal release of neurotransmitter from hair cells." Apparently this variability is a problem (he does not tell us why, though), and is related to the "probabilistic nature of the release". In a previous article ("Afferent diversity and the organization of central vestibular pathways", 2000), he had already indicated the usefulness of such a criterion: "In most of these species, it has proved useful to distinguish afferents as having a regular or an irregular spacing of action potentials". The text in (2012) is dense and ambiguous. We can feel the influence of two thought-streams which have been hastily joined together.
In (2007), the first stream is very clear. Goldberg mentions Attneave and Barlow as the sources of his inspiration, which explains his traditional Shannonian view of neuronal processes.
"Analog neural signals must be converted into spike trains for transmission over electrically leaky axons. This spike encoding and subsequent decoding leads to distortion." 
Spike encoders are then the perfect solution to remedy this distortion.
"The inputs and outputs of neuronal computations are continuous signals which are expressed as currents, voltages, and chemical concentrations... Because axons—the wires of the nervous system—are electrically leaky, these signals must be encoded as spikes, which are amenable to active restoration and transmission over long distances."
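Whatever one makes of the premise, the scheme being described is easy to sketch: an integrate-and-fire encoder accumulates the analog signal and emits a spike each time the running integral crosses a threshold, so that the spike times carry the signal. This is an illustrative toy, not Saxena and Dahleh's actual model; all parameters are made up:

```python
def encode(signal, dt=0.001, threshold=1.0):
    """Integrate-and-fire encoding: integrate the analog signal and
    emit a spike time each time the integral reaches the threshold."""
    acc, spikes = 0.0, []
    for i, x in enumerate(signal):
        acc += x * dt
        if acc >= threshold:
            spikes.append(i * dt)   # record the spike time
            acc -= threshold        # reset, keeping the overshoot
    return spikes

def decode_rate(spikes, window, threshold=1.0):
    """Crude decoding: mean signal level ~ threshold * spikes / window."""
    return threshold * len(spikes) / window

signal = [5.3] * 1000               # a constant input of 5.3 for one second
spikes = encode(signal)             # 5 spikes, roughly every 0.19 s
estimate = decode_rate(spikes, window=1.0)   # 5.0, a crude estimate of 5.3
```

The gap between 5.3 and 5.0 is the "distortion" the Shannonian literature worries about; whether any of this picks out a biological device in the afferent terminal is exactly the question at issue here.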

The rest of the article is I am sure very interesting for mathematicians, but is only relevant if you accept Goldberg's premises and interpretation.
Goldberg's spike encoders are one more in the long row of wonderful mathematical devices promoted to biological mechanisms that we are supposed to go looking for in the brain in the coming millennium.

They embody everything that is wrong with neuroscience: its subservience to the alien concepts and paradigms of mathematics when these are blindly taken for immutable truths.

The Brain: some problematic concepts
A Legitimate use of mathematics in Neuroscience

As so rightly put by Baloh and Honrubia (2010): "The motion of the cupula can be likened to that of a pendulum in a viscous medium." Mach's (and Breuer's) idea of using fluid dynamics to analyze inner ear processes is put to good use by Steinhausen in "Über Sichtbarmachung und Funktionsprüfung der Cupula terminalis in den Bogengangsampullen des Labyrinthes" (1927). His model is considered by all to be very accurate, and so far it is the only example of a mathematical model which makes sense to me. Thinking about this particular fact, I can easily see why I do not have any objection to Steinhausen's analysis.
It concerns a real, existing and observable portion of the body or brain, and not a fictitious reconstruction or extrapolation based on abstract mathematical arguments.
And that is exactly where the usefulness of mathematics is at its peak. Mathematics makes it possible to analyze complex processes in their fundamental relationships, and allows a quantitative description of these relationships that then opens the way to all manner of control and further experimentation. Science without mathematics is unthinkable (I know, some philosophers would not agree with me; for instance Field in "Science Without Numbers", 1980. But even Field and the so-called "fictionalists" do not deny the usefulness of mathematics). But whatever the epistemological value of "pure" mathematics, it can never be blindly taken as proof of the existence of empirical properties or processes.
The laws and principles that mathematics can unveil, based partly on known physical knowledge, partly on mathematical inferences, can be applied in so many different ways that any assumption as to the physical device that should support them is entirely arbitrary. It would be like trying to describe the unique machine that could perform a certain computer program or algorithm. Even if we are right about the algorithm, and that is a very big "IF" when it comes to the brain, we still do not know how it is implemented in practice. That algorithm might itself be the mathematical abstraction of limited aspects of more complex processes which, when considered as a whole, supersede the simplistic assumptions of the algorithm.
Ptolemy also comes to mind when considering the dangers of wrong assumptions and their impact on our conclusions and calculations.
Before inventing new equations, Galileo had first to change his way of thinking, and consider the sun, not the earth, as the center of the solar system. 
It is of course conceivable that continued mathematical calculations would have finally convinced everybody of the soundness of the new paradigm. The astronomical calculations were after all based on real, observable movements of the heavenly bodies. But how long would it have taken mathematics to change the way people thought?
And if we compare Galileo's exploit with the current situation in brain research: how long will it take before scientists realize that their extraneous calculations cannot explain brain processes?
Seen in an extremely abstract manner, there is, maybe, no reason why mathematical calculations, based solely on the overt behavior of neurons, would not give a complete model not only of the brain, but of the behavior of the living organism that possesses it. We would not need pseudo-sciences like psychology, sociology, or maybe even economics, to analyze and predict human behavior.
This "eliminative" conception has, as strange as it may seem, many adepts, and I fear this debate, like many others, knows no end. One thing is certain though: it would take a very long time for such an eliminative approach to bear fruit. Meanwhile, it might be smarter to use other means than only physics and mathematics.
We do not need to revert to the "Verstehen" or "hermeneutics" of the German philosophers of the 20th century, but we certainly do not need to close our eyes to the fact that we are dealing not with pure physical and mathematical processes, but with living organisms.

The Brain: some problematic concepts
Are mechanical logic gates possible?
Well yes, of course! Already in the 19th century Babbage had attempted to build his Analytical Engine (his Difference Engine was later actually built by the Science Museum in London). But the devil [George, is that you?] lies in the details. What exactly is meant by "mechanical"? Since the principle is the same, let us just use electrical switches and try to build a logic gate with only those switches. We will soon realize that it is impossible. Having one switch On and the other Off, or both On or Off, gives us the first stage of any possible operation. And then? Whatever we do, we cannot get a third switch to choose between them automatically. Which would be possible if all switches, when turned On (or Off), would move, turn, or otherwise manipulate a physical link to the third switch.
In other words, we need two distinct kinds of mechanical devices. We could, for instance, use running water as energy source, but that water in itself will just keep running around. It will have to manipulate other physical elements, like wooden or metallic switches, or dams, to make the system go from one state to the other.
But how are modern computers built? That is the biggest secret since Santa Claus: computers are not built from logical devices! A logic gate relies on physical principles to realize what we have just tried to do with water and dams. I will skip the juicy details but it boils down to this:
A surface of molecular configuration A is pressed upon a second surface of molecular configuration B, and voilà! Houston, we have power! This little piece of physical miracle is equivalent to the real manipulation of a dam or a switch... Without any human intervention!
It has become programmable!
Bennett, and others, thought that the logical structure of computers was so fundamental, and independent from its material substrate, that it could be built from toilet rolls. Well, sir, it cannot. You will need something else besides those rolls, and by that I mean something that relies on another (mechanical) principle entirely. 
[This of course does not invalidate the idea that computers could one day become de bons citoyens, and I wish them all the luck in the world... As long as they do not want to terminate us all, of course.]

What can we learn from this?
I had already pointed out that the physical and the mental need and complete each other. What we have here is a very tangible proof that no single principle can be computationally complete. The fact that all computer parts can be considered as physical has certainly contributed to the general mystification: the primacy of the logical over the physical. This shows that algorithms may be independent of any specific material substrate, but that they not only need one in a general sense, like anything we know of needs a material substrate; it is in fact the very nature, the very properties, of that substrate that create the logical operations.
So, we could tell Brooks that it is indeed a matter of "stuff"! [see my thread Hearing, the entry Calculations and Sensations (3)]

Without the physical properties of transistors, or their equivalents (punched cards in the case of Babbage's invention, without which the brass components would be just a mass of metal), there would be no logical operations. To be clear, it is not the fact that logical architectures need physical elements for their material realization that is fundamental, but the physical properties of those transistors or components.
This also goes beyond von Neumann's distinction between hardware and software, which is in itself but a logical distinction, since any hardware can be emulated by software [any Turing machine can be emulated by another Turing machine]. Babbage's punched cards allowed, in contrast with a software program [that is certainly debatable], a physical manipulation of the machine, just like the punched cards used in music organs.

The Brain: some problematic concepts
Are mechanical logic gates possible? (2)
I am afraid I might not have been as clear as I wished. When I say that water as an energy source has to manipulate physical elements like dams, there is still only one principle involved as long as the manipulation is the same. For instance:

open: let water flow
close: stop water flow.

But such a system would be incomplete and functionally identical to a bunch of switches in any configuration. It would still do nothing but move the water around in logically useless circles. It could only work as a logic gate if it did something like:

open: let water flow and move the next dam up (or down, left or right) half the distance needed to let the water flow.
close: stop water flow.

Such a system would make it possible to automatically emulate an AND gate.

What is important here is that, instead of just allowing or stopping the flow, we now need a different action altogether: in one case we would, for instance, push or pull a wedge or lock mechanism; in the other case we could still use the same mechanism, but then we would need something to regulate the amount of force exerted on the wedge to move it only halfway. Half the amount of water normally moving the wedge? That would certainly do the trick. A simple wall would take care of that, dividing the flow into two equal streams without any human intervention.
By this simple trick we have abandoned binary in favor of ternary logic. A gate can be open, closed or half-open. Three possible values, even though the system logically recognizes only two.
Physically, the duality lies in the fact that we have to manipulate not only the energy source (using only half of the normal amount), but also different kinds of devices: dams with a lock mechanism in one case; dividing walls in the other.
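For concreteness, here is a toy simulation of the dam scheme just described; the numeric encoding of the wedge position is mine, purely to make the half-open state explicit:

```python
# Toy model of the water AND-gate described above. Each input dam, when
# open, pushes the output wedge half the distance needed to let water
# through, so the output dam has three physical states (ternary), even
# though the system logically recognizes only two (flow / no flow).

def water_and_gate(input_a_open, input_b_open):
    position = 0.0                  # 0.0 = fully closed, 1.0 = fully open
    if input_a_open:
        position += 0.5             # half the water, half the push
    if input_b_open:
        position += 0.5
    if position >= 1.0:
        return "open"               # water flows: logical 1
    if position > 0.0:
        return "half-open"          # the third, physically real state
    return "closed"                 # no flow: logical 0

# Water flows only when both inputs are open: the behavior of an AND gate.
states = [water_and_gate(a, b) for a in (False, True) for b in (False, True)]
# states == ["closed", "half-open", "half-open", "open"]
```

The simulation of course runs on a conventional computer, which is itself a nice illustration of the point: the water, the wedge, and the dividing wall are two different mechanical principles doing one logical job.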
Furthermore, a computer could never use water energy exclusively. In fact, most of that energy would have to be converted into mechanical or electrical energy, confirming the idea that a single principle cannot be computationally complete.
What does that mean for the brain?
For one thing, neuronal logic gates are probably impossible since they would have to be supplemented by another principle. Whether such a second principle can be found in the brain is something I cannot of course exclude beforehand. But I certainly would not know where to look for it.

The Brain: some problematic concepts
Logic, Mathematics and the Brain

Let us take a very simple example:

A) "IF 1+1=2 AND 2+1=3 THEN 1+1+1=3"
B) 1+1=?   [compute]
C) 2+1=?   [compute]
D) 1+1+1=? [compute]
E) Is (A) True? [answer TRUE or FALSE]

Argumentation A-E has to be translated into meaningful computer instructions. Such a translation would look something like this:

10 a=1+1
20 b=2+1
30 c=1+1+1
40 IF a=2 AND b=3 AND c=3 THEN A="TRUE" ELSE A="FALSE"

But then we could have c=0+3, or c=1000-997, or any other combination of an infinite series [the same holds for a and b]. The only way to make sure that our computer program is a valid translation of our argument would apparently be to use literal symbols.

We would then have:

11 a="1+1"
21 b="2+1"
31 c="1+1+1"
41 IF a="2" AND b="3" AND c="3" THEN A="TRUE"

But what we mean are not literal but mathematical values. It is important for us that 'a' is equal to the mathematical value of 1+1 and not some literal expression, indistinguishable from arbitrary expressions like "the way to Rome", as Frege might say. Furthermore, such a program is not executable in this form; we would have to add what we want in fact to be proved: c="3".

A way out of this conundrum would be to equate all expressions yielding the same result. Line 41 would then legitimately be considered True whatever formulas are used for a, b and c, as long as the results are as they should be.

I see no reason why we could not do that. It would, mathematically speaking, be very sensible and meaningful.

But not logically or philosophically speaking. The fact that we are unable to translate a simple argument without disfiguring it, or less negatively, turning it into something else, is certainly worth further analysis.

How does our brain do that?
It would seem like we take both groups of statements as valid at the same time!

10 a= 1+1
11 a="1+1"

What we are really thinking then is that not only the mathematical values are important, but also the way they are posited and obtained.

In fact, a good translation should then be something like this:

12 a1=1+1; a2="1+1"
22 b1=2+1; b2="2+1"

Unfortunately, that is still not what we actually mean by our statements. What we really need is some kind of quantum computer where one and the same variable could contain two different values, one mathematical, the other literal! Something quantum computers, as far as the theory goes, and as far as I know, cannot do. The cat is either dead or alive; it cannot turn into a bat, can it?
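For what it is worth, an ordinary programming language can already carry both readings side by side, keeping the literal expression and computing the mathematical value from it on demand; whether that captures what we "really mean" remains, of course, an open question. A minimal Python sketch:

```python
# Keep the literal expression as a string; recover the mathematical value
# from that very same string when it is needed. (eval is harmless here
# only because we wrote the strings ourselves.)

a = "1+1"
b = "2+1"
c = "1+1+1"

# Literal reading: "1+1+1" and "0+3" remain distinguishable expressions.
assert c != "0+3"

# Mathematical reading: evaluating the same variables yields the values.
A = "TRUE" if eval(a) == 2 and eval(b) == 3 and eval(c) == 3 else "FALSE"
# A == "TRUE"
```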

The Brain: some problematic concepts
The illusion of optical illusions
One of the most familiar optical illusions pictured in books and articles is the form of a circle that turns into an ellipse when we rotate our view. For instance, the picture of a bicycle wheel stops being a perfect circle when we look at the bicycle from up front instead of from the side. The funny thing is that we never get the same impression in reality. A circle remains a circle, however we look at it. We would never mistake a ball for an egg, or vice versa.
It seems like the change in form is limited to the two-dimensional shapes of our drawings and pictures. Because 3D vision is so often pictured as a form of advanced 2D vision, authors have been repeating this misconception since the nineteenth century. But what if there did not exist any optical illusions, at least not the way they are presented? Take the different kinds of gray or black spots that we see in certain line drawings even though they are not really present in print form. Are they illusions, or normal phenomena? I personally think that our tendency to interpret everything in 2D prevents us from understanding 3D vision. Maybe we should stop thinking in terms of illusions, and try to find explanations for them. Maybe we should start treating 3D vision as something different from a complex form of 2D vision, and see so-called optical illusions for what they really are: normal visual phenomena that deserve a full explanation in their own right.
To go back to our familiar example: instead of assuming that circles become elliptical because they do so on paper, we should wonder what makes a circle remain a circle from whichever angle we look at it. Until now, the only (gestalt) explanation is the so-called constancy of perception. Something like the dormitive property of opium.