[First, some considerations concerning a general neural process.]
The Myth of Synaptic Efficacy
This is a widespread belief that probably has its origin in Shannon's information theory, indiscriminately applied to neural processes. Once this view is rejected, the idea that "[t]he extent to which synaptic activity can signal a sensory stimulus limits the information available to a neuron" (Arenz et al., "The Contribution of Single Synapses to Sensory Representation in Vivo", 2008; my emphasis) loses all plausibility.
What can be rejected for the brain as a whole (see the entry "Do we get too much information?" in my thread Retina: Miscellaneous) can certainly be put in doubt when dealing with (individual) neurons.
Many concepts related to synaptic efficiency are likewise taken as dogmas, one of them being the probability of neurotransmitter release, which is supposed to be enhanced or reduced according to the circumstances. Such a concept, which is obviously a statistical instrument in the hand (and head) of an external observer, is given independent existence in the brain without any justification other than its usefulness. And even that justification is never explicitly given; such a step is probably never considered at all.
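For the record, here is the statistical instrument in question, in the quantal (binomial) form modelers commonly give it. The site count and release probability below are arbitrary illustrative values, and nothing in this sketch requires the probability to exist anywhere but in the observer's model:

```python
import random

def simulate_release(n_sites=5, p=0.3, trials=10000, seed=42):
    """Quantal model: each of n_sites releases a vesicle
    independently with probability p on every trial."""
    rng = random.Random(seed)
    counts = [sum(rng.random() < p for _ in range(n_sites))
              for _ in range(trials)]
    return sum(counts) / trials  # observed mean quanta per trial

# The "release probability" p exists only as a parameter the
# observer fits to the trial-to-trial variability: mean = n_sites * p.
mean_quanta = simulate_release()  # ~1.5 for these toy values
```

The point of the exercise: p is recovered by fitting the observer's model to variability across trials, which is exactly what makes it a descriptive instrument rather than something the synapse itself "has".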
The ever-implicit idea is that neurons, just like telephone and telegraph wires (in Shannon's mind), or Ethernet and Internet cables (in contemporary minds), are never 100% reliable. They are all subject to the insufferable necessity of Noise.
The proof? Well, it is generally known [beware of such expressions in an article! All the red lights and sirens should start blinking and wailing!] that a neuron does not always release enough neurotransmitter to provoke a post-synaptic reaction. A sure indication that the process is inherently defective!
This contradicts the idea that the concept of threshold expresses, in fact, the necessity for the brain to distinguish between meaningful stimuli and unimportant ones. [See the discussion of Levitan & Kaczmarek (2002) in my thread Neurons, Action Potential and the Brain.]
Whatever the (in my mind, doubtful) value of the concept of threshold, it remains very strange to attribute such an intrinsic defect to the brain. A defect that is rejected by all when it comes to the neuromuscular junction. After all, it is obvious that, as a rule, we can rely on our muscles to perform as we want them to. Which would be impossible if motoneurons were guided by stochastic principles. The vote of confidence in favor of one group makes the mistrust of the rest of the neurons even more apparent and unjustified.
Whatever the usually unspoken reasons for such a discrimination, the conception of synaptic inefficiency does make the idea that learning (and memory) is just the way to remedy this defect very attractive. This is, after all, the principle behind Hebb's assemblies: neurons that fire together have their connections strengthened, thereby neutralizing any uncertainty about their respective firing abilities.
[Even a critical approach such as that of Martin et al. in "Synaptic Plasticity and Memory: An Evaluation of the Hypothesis", 2000, has its limits: "We conclude that a wealth of data supports the notion that synaptic plasticity is necessary for learning and memory, but that little data currently supports the notion of sufficiency."]
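The Hebbian principle just mentioned, in the bare form modelers usually give it, is a one-line correlational update. A minimal sketch with invented numbers, not a claim about real synapses:

```python
def hebbian_update(weights, pre, post, lr=0.1):
    """Strengthen each connection in proportion to the coincidence
    of pre- and postsynaptic activity (fire together -> wire together)."""
    return [w + lr * x * post for w, x in zip(weights, pre)]

w = [0.5, 0.5]
# First input fires together with the cell, second stays silent:
w = hebbian_update(w, pre=[1, 0], post=1)
# first weight rises to ~0.6, the silent one is untouched
```

Notice how the rule only ever reinforces what already fired together, which is precisely why it lends itself to the "remedy for unreliability" reading criticized here.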
What is the value of such a conception? In other words, does it really help us understand Learning and Memory?
Input changes the neuron: what could be more obvious? Without this principle we have no brain at all. But what does this change consist of?
Sensory neurons must be able to relay external events the same way all the time. This follows from the principle that our experience of the external world is, as a rule, dependable.
Natural input should therefore not change the characteristics of a sensory neuron. How about its projections? Could we do away with the rule that rom-neurons should never be affected by any natural input? After all, we are talking about a change in memory and learning, not a change in the genetic makeup of the brain. Even Kandel's view that genetic transcription takes place as a result of certain input patterns, respects the integrity of the brain: something new is created with existing neurons and mechanisms.
Let us take the old and simple example of the sensation of red: no natural input should interfere with the way such a visual neuron reacts to light. Furthermore, any projection of such a neuron should enjoy the same diplomatic immunity. Only the relationships of such a sensation with other sensations or memories may be altered. Does it sound as plausible to you as it sounds to me?
What is then the use of a more efficient synaptic connection of such a neuron with its postsynaptic neighbor(s)?
In "Long-term potentiation [LTP]: What’s learning got to do with it?", Shors and Matzel (1997)
[one of the few critical articles about LTP, it has been cited fewer than 400 times, while one of the articles that cited it (Malenka and Nicoll, "Long-term potentiation--a decade of progress?", 1999), and which supported the view of LTP as a learning mechanism, has been cited almost 2500 times! It is obvious which conception would win a popularity contest!]
expresses the view that LTP, rather than being a learning or memory mechanism "may serve as a neural equivalent to an arousal or attention device in the brain."
This is how the authors qualify their position regarding synaptic efficacy:
"Most would agree, however, that memory formation involves the modification of synaptic transmission. [...] In this regard, we are unable to offer a “better” hypothesis." (par.3.6). Still, they prefer to think that "an increase in gain (and consequent perceptual awareness) could then modify learning by increasing the likelihood that contingent relationships between stimuli are recognized."
In other words, enhanced synaptic efficacy would make objects and events stand out more in our experience.
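Their gain hypothesis can be caricatured in a few lines. A toy sketch, with a threshold and numbers of my own choosing, of how a higher gain would make the same weak stimulus cross a detection threshold:

```python
def detected(stimulus, gain, threshold=1.0):
    """Gain reading of LTP: the same stimulus is more likely to
    cross a fixed detection threshold once the gain is raised."""
    return stimulus * gain >= threshold

weak_stimulus = 0.4
print(detected(weak_stimulus, gain=1.0))  # False: goes unnoticed
print(detected(weak_stimulus, gain=3.0))  # True: now it "stands out"
```

Note that nothing in this little model says what raised the gain in the first place, which is exactly where the circularity discussed below creeps in.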
The problem, of course, is the circular character of such an approach: we could say in good faith that what makes an object stand out is the attention we give to it. Imagine yourself running away during an air raid. You are surrounded by explosions and buildings being destroyed before your eyes, but all you can think about is the school your children go to, which you hope to reach in time. Somebody else with no children, or who has already collected them, would of course react quite differently to nearby events.
The fundamental consequence of the distinction between rom- and ram-memory is far-reaching: it is not the neurons that relay the stimulus that should change, nor the neurons with which they are already connected, but precisely those with which they have no connection yet. Having a bad experience that makes us reevaluate how we react to a certain object does not make us forget how we used to react. Ceasing to love someone, or even coming to hate him or her, does not make you forget that you did love that person once.
That would mean that the so-called effects of synaptic efficacy observed in neurons have in fact nothing to do with memory or learning, or even attention!
The Cerebellum does not make sense
At least, to me it does not.
If we believe (and I have no reason not to) the anatomical/physiological description that has been accepted as a dogma since Eccles et al ("The Cerebellum as a Neuronal Machine", 1967), then we are dealing with a dream come true for brain scientists:
- a regular topography, easily distinguished into the hemispheres and the vermis (the in-between area);
- two input paths which have mostly been mapped;
- interneurons with easy to find connections to the main cells;
- one single output path.
How much better can it get?
But what does it do?
There was a time when the Cerebellum, notwithstanding the many discordant reports (see the preface and the first chapter, "Historical overview", in "The Cerebellum and Cognition", 1997, Schmahmann, editor), was thought to be mainly, if not exclusively, an organ for the control of movements. The situation has now changed, and even one of the co-authors of (Eccles 1967) has no problem with a much wider functionality of the Cerebellum (Ito, "The Cerebellum: Brain for an Implicit Self", 2012), though he still concentrates in his analysis on the movement aspect. Marr, whose brief foray into the area of the cerebellum came before he set out to revolutionize the field of (computer) vision, is completely focused on the conception of the cerebellum as the site of automatization of conscious actions. (Marr, "A theory of cerebellar cortex", 1969; "A Theory for Cerebral Neocortex", 1970a; Blomfield & Marr, "How the cerebellum may be used", 1970b; Marr, "Simple memory: a theory for archicortex", 1971; see also, for a theory very similar to Marr's: Albus, "A theory of cerebellar function", 1971; and, from the same author, "The Marr and Albus theories of the cerebellum: Two early models of associative memory", 1989.)
But for us, readers, the expansion of the functions of the cerebellum only makes the situation worse. It was already difficult to understand how the cerebellum could fulfill such a clear and definite function, as proposed by Marr, and taken over by Ito and many others, of helping the cerebrum in automatizing action patterns.
It seemed, at least on paper, a neatly delimited operation: when we are practicing a new set of actions, like learning how to play tennis, our brain has to be very careful and methodical in the planning and executing of all the movements necessary. Just try to remember how difficult it was not to look at your feet when you were first starting to learn how to drive. The more we practiced, the less we had to think about what we had to do in different situations. That, according to Marr, was the doing of the cerebellum!
Okay, I can buy that. It makes perfect sense that such a process is happening somewhere in the brain. Why not in the cerebellum?
But then we get the incredible figures (from Eccles 1967) thrown nonchalantly at us, and innocently repeated over the last half century without a hint of discomfort:
- the number of spines is about 180,000 per Purkinje Cell (PC), the sole output neuron of the cerebellum. [Marr (1969) felt generous, and made it 200,000.];
- the internal connections of the PC's are also very complex and numerous. Which should not surprise us. After all, before coming to a conclusion, an output, a Purkinje Cell has to gather a lot of information from a lot of places, right?
- likewise, the other input path, via the so-called mossy fibers (MF), is a very complicated affair, with all kinds of interneurons, some excitatory, others inhibitory. You will understand that I really do not feel like going into these details. Certainly not right now.
- luckily, the first (or second, if you prefer) input path, via the climbing fibers (CF), is much simpler: one CF to a PC. Just the way I like it!
Let me first remark on the suppleness of Marr's mind. He has no trouble attributing very complex operations to a single neuron ("each Purkinje cell [can] memorize 200 or so different mossy fibre inputs", 1970b), but also giving a single output task to a single neuron ("each olivary cell corresponds to a 'piece of output' which it is necessary to have under control during movements.", 1969). That has, of course, everything to do with the way he viewed memory (1971). A conception that anticipates the methods used in his seminal book on vision (1982).
I will be brief and clear about my own biases: without the conviction that neurons are some kind of mini-computers, each capable of many operations, Marr's neural analysis seems completely bogus. I will therefore leave it for what it is, at least for now, and concentrate on the main difficulties as I see them.
Let us also limit ourselves, for now, to only one main function, the traditional one as seen by the pioneers already mentioned: coordination and automatization of movements.
The main theme will therefore be: Which neurons represent which movements?
1) What could be the use of such a number of synapses on the PC's? Are they meant to activate different parts that would somehow later be assembled into a significant whole? What could that be?
Or could it be that the number of spines is irrelevant to the function of each individual PC?
2) What is the output of the cerebellum? Or should we not speak of plural outputs, even when considering a single pathway, a single main function (movements), and a single cell type (Purkinje cells)? And why is it inhibitory? Do we maybe need to redefine inhibition?
3) Once we have answered both groups of questions, we should be able to analyze the relationships between the different interneurons more easily.
Still, we should at least pose the following question before proceeding with (1) and (2).
0) What can a single Purkinje cell represent, and how does that impact on the possible nature of both inputs?
In other words, we cannot hope to understand what happens within the cerebellum (or any other part of the brain), without a clear idea of the nature of neurons. The fact that there is, as the consensus goes, only one output pathway, should offer a very clear way, in the long run, for checking the plausibility of our models.
Oh, before I forget.
I am making no promises whatsoever regarding how far I can come with my analysis.
"Please allow me to introduce myself"
I am a man who has been frustrated beyond limit by a literary style which promises heaven and delivers only headaches!
Can somebody please tell me what codons are?
Here is a not completely humorous report of my attempts to understand Marr's prose.
"The synaptic arrangement of the mossy fibres and the granule cells may
be regarded as a device to represent activity in a collection of mossy fibres
by elements each of which corresponds to a small subset of active mossy
I am really surprised the reviewers did not reject this complex sentence as barbaric and obscure! Especially since it does not add anything to the meaning Marr wished to convey.
Let us try to dismember it!
1) The synaptic arrangement of the mossy fibres and the granule cells may be regarded as a device...
2) [this] device represent[s] activity in a collection of mossy fibres by [certain] elements...
3) each of [those elements] corresponds to a small subset of active mossy fibres.
Let us take the expression "a collection of mossy fibres" away since it appears to be redundant.
"The synaptic arrangement of the mossy fibres and the granule cells may be regarded as a device to represent activity by elements each of which corresponds to a small subset of active mossy fibres."
Let us get rid of "elements each of which corresponds to " since it only adds to the confusion, before we change the rest accordingly from singular to plural.
The final result is:
"The synaptic arrangement of the mossy fibres and the granule cells may be regarded as a device to represent activity by small subsets of active mossy fibres."
Obviously "the synaptic arrangement" itself is not a device, but can be analyzed as "small subsets of active mossy fibres".
Any way you look at it, it either hides a very deep truth that I am unable to fathom, or a very disappointing triviality: mossy fibers are connected to granule cells.
And that is where the "codons" come in:
"a codon is a subset of a collection of active mossy fibres." Now we know what all those cryptic sentences meant! It was all to introduce the concept of codon in such a mystifying way that once you found it you had the impression of making a great discovery!
An important specification is:
"The representation of a mossy fibre input by a sample of such subsets is called the codon representation of that input..."
Wow! Since "a sample of such subsets" is a codon, Marr is actually confirming that "The representation of a mossy fibre input by a codon is called the codon representation of that input..."
Here is an important question: is a "subset" a unity or a set of more than one fiber?
The following precision does not bring any extra clarity:
"a codon cell is a cell which is fired by a codon."
Which would mean:
a codon cell is a cell which is fired by a subset of a collection of active mossy fibres.
To which neurons do these codons refer? When a group of mossy fibers fires, what do they reach? Is there a single target, or is it a multitude of targets that then gets the collective appellation of "codon"?
But wait, Marr is not finished:
"The granule cells will be identified as codon cells, so these two terms will to some extent be interchangeable."
To what extent? Does that mean that sometimes they will not be interchangeable?
That still does not answer our question: is each granule cell a codon, or do we need a (sub)set of them to make a codon?
Oh, I am so sorry, my bad!
The "codon cell" is the granule cell, not the codon itself, and it is fired by a codon, which is a subset of a collection of active mossy fibers.
What could "fired by" possibly mean? That a whole bunch of mossy fibers target one single granule cell at the same time? That would be then a codon?
That is apparently what is meant by "codon"! Why didn't he just say so?
Noo! That is a pattern! A pattern is formed by all the mossy fibers that connect to a granule cell. This granule cell can be activated by any number of those fibers, and that would be the codon!
Wrong again! A pattern is formed by all the "active" mossy fibers that connect to a granule cell. And a codon forms a subset of these active cells.
So, in any case, a granule cell contains all possible information about codons? How does it pass on this specific information?
How should I know? I am not there yet!
Marr continues with:
"The size of codon that can fire a given granule cell depends upon the threshold of that cell".
This confirms the simple interpretation of a bunch of mossy fibers stimulating a granule cell. The more resistance the latter offers, the more fibers will be needed to surmount that resistance. Makes sense.
Oh wait, no it does not!
That would mean that the largest pattern takes it all? Or the first one that happens to be strong enough?
"the mossy fibres which synapse with the granule cell determine the codons which may fire that cell."
There is no one-to-one correspondence between codons and granule cells. Many codons can fire the same granule cell.
But how do the codons know which one of them is allowed to fire the granule cell? Or do they all fire at the same time?
Let us forget about codons and just look at the active fibers that somehow synapse with a granule cell.
According to Marr, not all are allowed to activate the granule cell, even if they are themselves already active.
"The size of codon that can fire a given granule cell depends upon the threshold of that cell, [I forgot to add that last time] and may vary..." It is now the granule cell that determines which codons can activate it through its threshold. But even if this threshold can set not only a minimum, but also a maximum, we are back to the rule " the largest possible pattern wins", whatever the concrete situation the neurons are supposed to represent.
Never mind. Let us be patient and plod on.
He then comes up with, for me, incomprehensible formulas that are supposed to prove that, somehow, there has been a reduction of patterns from one cell to the other, that is, from the mossy fibers to the granule cells. I do not even want to ask if Marr's formulas make any sense to mathematicians, because I honestly do not care! We are talking about the contexts in which movements are taking place. Marr pretends that he is able to express these contexts in mathematical terms that are not derived from an analysis of physical movements in any way, but represent a priori calculations of probabilities. And then he claims that the number of patterns involved has suddenly diminished? This is pure magic!
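For whoever does want a handle on the kind of quantity those formulas juggle: they count subsets. With a given number of active fibres and a given codon size, the number of possible codons is a binomial coefficient. This is my gloss, with toy numbers, not Marr's actual derivation:

```python
from math import comb

def codon_count(active_fibres, codon_size):
    """Number of distinct codons (subsets) of a given size that can
    be drawn from a set of active mossy fibres."""
    return comb(active_fibres, codon_size)

print(codon_count(100, 5))  # 75287520 possible codons of size 5
```

Which makes the complaint concrete: the counting is pure combinatorics, with no connection to any physical property of movements.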
Okay! That's it! I give up!
Is the Cerebellum a Neuronal Machine?
Ever since Eccles (1967), its authors, and most enthusiastically Ito, have tried to get the computer community to endorse their analysis of the cerebellum through the creation of neural network models. This is how Ito (2012, ch. 3) expresses how he felt at a symposium he and his co-authors had organized to get these specialists on board: "I was frustrated enough at the Salishan meeting to ask what else experimentalists would need to uncover before we would be able to understand the meaning of these wiring diagrams." I found his honesty quite disarming when he added:
"Someone equally frustrated replied that the available diagrams were too simple to construct even a primitive radio, so more information was urgently needed before any meaningful model could be conceived." (my emphasis)
You can imagine Ito and his colleagues' relief when Marr came up with his theory that turned this "primitive radio" into a huge success!
But how justified were they in their relief? Even assuming that the cerebellum is exclusively concerned with movements, can we describe what happens in this part of the brain in computational terms, and still keep within the boundaries of biological plausibility?
Let me try a very general approach first.
Suppose I am shooting hoops, trying to get the ball in the basket without it touching anything else. You could say that I will need each time to adjust the "weights" controlling my muscles. And for that, I need visual information (which is certainly a form of feedback, as are the proprioceptive sensations provided by my body) that I can translate in new, hopefully more successful, movements.
Let us assume that the feedback information "made me think" that I needed, shooting from the same position, to put more power in my movements. I will not bother for now with the question how we are able to fine-tune the intensity of our movements for them to be a little more or a little less intense, something we all can do, but concentrate on the fact that we can do that at all, however crudely.
How do we control the intensity of our movements? How can we produce more (or less) neurotransmitters at the neuro-muscular junction?
The answer to this question may well be the key to fundamental brain functions.
Let us forget the (probably) unanswerable question of Will, and concentrate on the neural mechanisms that our will necessarily has to use to express itself.
Also, it would certainly be interesting to know if this ability of control is limited to the neuro-muscular junction. Can we, through sheer will, influence the quantity of neuro-transmitter produced at other locations in the brain which are more or less already under our control, and which do not involve muscles? Do such locations exist at all?
In a less speculative perspective:
Is there a neural mechanism to control neuro-transmitter release other than through sensory input?
In our example, the fact that I did not reach my goal, was reason enough to try again. Also, the feedback information somehow determined if the following release of neurotransmitters needed to be intensified, or reduced.
Let us assume that there is some kind of automatic reflex playing out. The feedback has, in this view, taken over the role of a direct stimulus capable of increasing or reducing the release.
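That reflex reading can at least be stated precisely. The sketch below, with toy numbers of my own invention, shows a loop in which the feedback error directly scales the next command; the error itself sets the direction of change, and no inner agent has to choose it:

```python
def practice(target=10.0, gain=6.0, lr=0.3, attempts=20):
    """Feedback loop: the error of each attempt directly scales the
    next command; the error itself sets the direction of change."""
    for _ in range(attempts):
        outcome = gain              # toy mapping: command -> result
        error = target - outcome    # feedback from the missed shot
        gain += lr * error          # undershoot raises, overshoot lowers
    return gain

final_gain = practice()  # settles very close to the target
```

The design point is that "more power" or "less power" never appears as a decision anywhere in the loop; it falls out of the sign of the error, which is the effect-not-cause claim made below.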
From the moment the fetus becomes capable of movement, it keeps trying out its muscles. Babies very often look like they are working out while lying on their backs. They are mapping their movements to their own intentions and sensations. By the time we can play basketball, we are pretty well acquainted with our body and with how much energy each movement approximately needs. I say approximately, because even familiar situations vary in infinite ways. Nowhere is this more obvious than when we are attempting very precise movements like shooting hoops.
That is why professional athletes need to train so often. They need to learn configurations that are in principle infinite. If age did not take care of it, they would keep learning until the end of time. If the brain could use mathematical formulas, we would all be professional athletes by the age of 18 or less.
Assuming the existence of a memory mechanism, a database as it were, of specific movements and spatial configurations demands neither geometrical computations nor a mysterious quantum gate in the brain.
Not even the direction of the change (more or less power) could be under the conscious (or unconscious for that matter) control of the brain. That would lead us into an infinite regress. The direction of change can, in this situation, only be an effect, never a cause. [One of the many reasons why neural networks keep George busy.]
We can now, I think, answer our question: we cannot control the amount of neurotransmitter used by each movement. That amount is an effect of the stimulation level of the muscle. It is a chemical reaction, itself also an effect, and never a primary cause (thank you, Aristotle, for this ancient but clear concept).
Do we have then the memory of stimulation levels? That would bring us back to the matter of neural codes in a hurry!
Let me state my, speculative, view on this matter.
1) No, the brain has no way of registering stimulation levels other than by the intensity of the sensations we experience.
2) The neural record or memory of these sensations is therefore an abstract, clinical representation of the sensation, be it color, sound, or contraction of a muscle.
3) Sensations, including those concerning us here (more or less power to a movement), cannot be considered in isolation. Each sensation will be the result of a huge number of connections all through the brain, making its identification easier.
4) Last, but certainly not least: sensations that we feel to be different from each other have to be somehow kept apart. Even (3) could not explain all the fine distinctions between the many nuances of a same sensation. I am not sure it is a completely "physical" process in the usual sense. Memories must have a way, just like external or internal stimulations, to (re)produce those sensations.
5) We will have to rethink the concept of causal efficacy in a materialist perspective.
6) We should take the work of Weber as an example. (See my thread The Brain: some problematic concepts.)
If the cerebellum does not need to, or cannot, compute movements, how can we then be sure that it has anything to do with movements at all, except in a very general, indirect way?
The General Role of the Cerebellum as seen by Marr-Albus-Ito
- J.D. Boyd, "A case of neocerebellar hypoplasia", 1940;
- Glickstein, "Cerebellar agenesis", 1994;
- C.A.R. Boyd, "Cerebellar agenesis revisited", 2010;
- Manto et al. (eds.), "Handbook of the Cerebellum and Cerebellar Disorders", 2013.
- [see also in my thread The Brain: some problematic concepts the entry "Learning from your mistakes: Neural networks and their significance"]
Ito (2012, p. 42) thinks that the fact that feedback mechanisms can be either conscious or unconscious can make a difference as far as the biological plausibility of neural network models of the cerebellum is concerned: "It could be reasoned that the initial feedback control was performed consciously, whereas the later feedforward control by the cerebellum is performed unconsciously, this idea being in good general agreement with our daily experiences."
This does not change a thing in my analysis. The unconscious character of the adjustment of weights does not change its fundamentally local nature. In fact, it makes the problem even more acute. Such a mechanism is completely cut off from an important function of the organism and must prove itself self-sufficient. Which, of course, it cannot do.
Furthermore, the idea that the cerebellum takes conscious actions and turns them into unconscious ones does not seem to square with the empirical facts. ["Impaired coordination of voluntary movement is a cardinal sign of cerebellar disorders", Novak et al., "Deficits of Grasping in Cerebellar Disorders", in Manto (2013).]
Patients with cerebellar lesions often fail simple motor tests like touching their own nose with a finger. Such movements should be performed faultlessly under conscious control, according to the Marr-Albus-Ito conception. Also, in general, people with cerebellar agenesis (born without a cerebellum) show at least mild impairment of their motor skills, even when they are consciously trying to perform them.
Even researchers with opposite views, like Glickstein, and C.A.R. Boyd defending his grandfather's report (1940) against the former's attacks (1994), agree that the absence of the cerebellum means at least a minimal deterioration of simple everyday movements. In the words of the younger Boyd: "Taken together, a consensus seems to emerge, namely that adults may lead independent lives and have gainful employment dependent on motor skills in the absence of a cerebellum. However, the intellectual and the fine motor skills of such individuals may often be outside normal limits."
Nowadays, researchers prefer to speak of the "smooth control" the cerebellum is supposed to play in the production of movements. The jerky and uncoordinated aspects of animals and humans with cerebellar lesions or malfunctions would seem, in their view, to indicate that the cerebellum is responsible for making smooth and coordinated movements possible.
That is a very important shift away from Marr's original idea. A shift that would certainly necessitate a completely new analysis of the respective roles of the different cells in the cerebellum.
Which brings us to my second remark.
Neural networks are based on the analysis of a problem delimited by the researcher, while in real life it is exactly the reverse: the way neurons are connected to each other in a brain determines how a situation is experienced, and how we react to it.
This must urge us to extreme caution in the interpretation of our models. It confirms the idea that the role of a local neural network is determined not by the problems as we see them, but that these same problems are an effect of the neural configuration.
Neural modeling is in fact turning things around: it considers the effect (the problem as the researchers see it) as the determinant of the neural configuration that is causing it!
This would seem like a chicken-and-egg puzzle, and would therefore seem to justify the pragmatic approach to neural networks.
Saying that the effect of an action A can help us determine the cause of A seems very reasonable. After all, that is why lesions are considered as a legitimate source of knowledge concerning the possible functions of the concerned organs or brain parts.
Looking at the uncoordinated movements of a cerebellar patient, we can wonder what could possibly cause such a defect. Localizing the origin is a first necessary step; the second is trying to understand how that part functions normally, and why its lesion would produce such anomalies. This is where the creativity and resourcefulness of the researcher come into action. Neither the lesion nor its putative effects gives an unequivocal answer as to the possible cause(s). This is something that researchers who put their blind trust in the explicative powers of neural networks forget much too easily.
Added to all the reasons stated earlier, I can only advocate a minimal use of neural models in trying to ascertain brain functions. Their usefulness is limited, to say the least.