Neurons, Action Potential and the Brain
I would like to present a short and general review of a book treating low-level processes in neurons. This book raises many questions concerning the nature of sensation, and its philosophical, theoretical and methodological consequences. I propose to leave the discussion of those questions until after the analysis of these low-level processes and the assessment of their general significance. I think it is very important to lay down a scientific foundation for the discussion, and that can only be done with a thorough understanding of the chemical processes that determine a neuron's behavior, and ultimately, that of the brain as a whole.
I am assuming that this book is representative of the current scientific view on the relevant chemical processes in the brain, and does not present a controversial, or obsolete, interpretation of said processes.
As will become clear from the following lines, this book presents quite a challenge for the conceptions I have been developing concerning vision, neurons and the brain in general. I hope to be able to show that, far from contradicting those conceptions, it in fact reinforces them. But that is of later concern. Even without those considerations, the analysis (of the analysis) of low-level processes should offer enough material for us to think about.
Levitan&Kaczmarek: "The Neuron: Cell and Molecular Biology", 3rd edition, 2002.
Allow me to start with some stylistic, and therefore secondary remarks.
"We will provide some examples of how change in its electrical properties allows a neuron to regulate different types of behavior." (p.62) (my emphasis)
Such a sentence is, unfortunately, quite representative of the ambiguous style of Levitan&Kaczmarek. It appears that they mean different neurons, with different electrical properties, which is radically different from the same neuron having different electrical properties, as the text seems to suggest.
Another annoying aspect of their writing style is the adoption of "Shannonian speak" without anything to show for it, as I hope to make clear:
" The "primary functions [of neurons] are to receive, modify, and transmit messages... We will concentrate first on the axon, the part of the neuron responsible for transmitting information from one part of the cell to another" p.47 (my emphasis)
And then comes what looks like a final, lethal strike against my conceptions:
"All neurons are not created equal. Even neighboring cells may be distinct in their electrical properties and exhibit different patterns of endogenous electrical activity" p.315 (my emphasis)
Before I go any further, let me state that I found the book very enlightening and thorough, and that I only regret that the undeniable technical know-how of the authors is not matched by their overall insights. Their knowledge of low-level processes surpasses by far any understanding they may have of the functioning of the brain as a whole. Let me try to justify such a harsh evaluation. I will content myself with listing the different aspects of the research, without really giving any details: first, it would take very long to explain each point in detail; second, by omitting those details, I hope to draw attention to the main lines.
Different types of neurons:
- There are neurons that connect to others via a gap junction, or electrical synapse, while others (no figures are given, but the suggestion is that they form a majority) use a chemical synapse.
- There are three different types of spontaneous firing, or resting potential: silent (no spikes at all); beating (many spikes following each other with no interruptions); bursting (groups of spikes separated by refractory periods).
- The action potential is not the same for all neurons: the graph of action potentials of different neurons can show different shapes (p.61).
- Neurons with a chemical synapse synthesize different neurotransmitters and hormones.
- Neurons interact (the authors would say communicate) with the outside (and also internally) via ion channels that have very different and distinctive properties. Different neurons have different ion channels.
- An action potential can have three types of effect on the amount of neurotransmitter released, showing either an increase or a decrease of that quantity: facilitation (a progressive increase over a stimulus train); potentiation (an action potential following a stimulus train releases more neurotransmitter than an identical but isolated action potential); depression (a progressive decrease with each following spike). (p.211-213)
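To make the first and third of these effects concrete, here is a toy numerical sketch in the spirit of resource-and-utilization models of short-term synaptic plasticity. Every parameter value is invented for illustration, and nothing in it is taken from Levitan&Kaczmarek; potentiation, which outlasts the stimulus train, is omitted.

```python
import math

# Toy sketch of short-term synaptic plasticity. All constants are
# illustrative assumptions, not measurements from the book.

def simulate_train(n_spikes, dt, tau_rec, tau_fac, u0):
    """Transmitter released per spike for a regular train (interval dt)."""
    x = 1.0            # fraction of releasable transmitter still available
    u = u0             # fraction of that pool released by one spike
    released = []
    for _ in range(n_spikes):
        amount = u * x
        released.append(amount)
        x -= amount                  # depression: the pool is depleted
        u += u0 * (1.0 - u)          # facilitation: release probability steps up
        # between spikes, both quantities relax back toward their resting values
        x += (1.0 - x) * (1.0 - math.exp(-dt / tau_rec))
        u += (u0 - u) * (1.0 - math.exp(-dt / tau_fac))
    return released

# Slow pool recovery, fast relaxation of u: each spike releases less (depression).
dep = simulate_train(5, dt=0.02, tau_rec=0.5, tau_fac=0.01, u0=0.6)
# Fast pool recovery, slow relaxation of u: release grows (facilitation).
fac = simulate_train(5, dt=0.02, tau_rec=0.02, tau_fac=0.5, u0=0.1)
assert dep[0] > dep[-1] and fac[0] < fac[-1]
```

The same deterministic update rule produces either pattern, depending only on the neuron's own constants, which is precisely the point made above: the effect lies in the neuron, not in the "message".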
But the same nonetheless:
- First, something I did not know and which I found really surprising. Neurotransmitters (and hormones) have only one function, and that is to get the target neurons to open their ion gates, which brings about a change in the electrical balance, thereby triggering an action potential. Once the neurotransmitters have done that (opened the gates, and nothing else besides that), their task is done and they are taken back up into the system. In other words, they have no influence whatsoever on what is happening inside the targeted neurons! They open the door but do not get in!
[If you think serotonin is what is helping you with your depression, then you are wrong. All it does, and that is certainly not something to underestimate, is activate the neurons that will make you feel good! In a science-fiction scenario, electrical impulses to the neurons concerned would produce the same effect.]
That should make the second point easier to understand: the reaction of a neuron to a stimulus depends, besides the intensity of the stimulus, only on the neuron itself. In other words, the neuron does not react to a message, it itself is the message! And even the intensity of the stimulus can be understood as the repetition of the same message (wake up!) over and over again, until it stops. Ion channels will open (or close), neurotransmitters will be released as often as an action potential comes through, and that is the end of the story as far as the neuron is concerned.
- The fact that ion channels come in different flavors, and modulate the intracellular reactions of the neuron, will of course have an influence on the rate and quantity of neurotransmitters released. But once again, the frequency or firing pattern does not change the nature of the ion channels. Faster firing just means that they have to work faster as well. That is, as far as the stimulating neuron is concerned, the targeted neighbor is nothing more than a simple wire passing the current to the one following it. But since we can say that of every neuron, we are irremediably led to the conclusion that it is not the inherent properties of the neurons that come first, but the neuron itself. The fact that neuron A has been stimulated, and not neuron B, is significant, the properties of A coming into play only after it has been stimulated.
These properties are of course not random, so we could say that, in evolutionary terms, they are the reason their neuron was targeted; but being the ultimate cause is not the same as being the cause now. What counts for a living brain is which neuron will react to the current stimulus. We must also realize that it is not so much the internal chemical properties, which are the same throughout a very large group of neurons, as the still unknown connections of these neurons with other neurons, maybe in distinct parts of the brain, that are significant here.
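The "simple wire" picture described above can be illustrated with a standard leaky integrate-and-fire abstraction. This is a textbook toy, not a model taken from Levitan&Kaczmarek, and all constants are invented: every spike is identical (all-or-none), and only the interval between spikes tracks how hard the cell is driven.

```python
import math

def lif_rate(current, r=1.0, tau=0.01, v_th=0.5, t_ref=0.002):
    """Steady firing rate (Hz) of a leaky integrate-and-fire neuron.

    Every spike is identical; only the interval between spikes
    changes with the injected current. Parameters are illustrative.
    """
    v_inf = current * r              # voltage the cell would settle at
    if v_inf <= v_th:
        return 0.0                   # subthreshold: no spikes at all
    # time to charge from rest (0 V) up to threshold, plus refractory time
    t_charge = -tau * math.log(1.0 - v_th / v_inf)
    return 1.0 / (t_ref + t_charge)

# Subthreshold input is silent; above threshold, rate rises with the drive.
rates = [lif_rate(i) for i in (0.4, 0.6, 1.0, 2.0)]
assert rates[0] == 0.0
assert rates[1] < rates[2] < rates[3]
```

Nothing in this relay requires the downstream cell to "read" anything: the output rate is a direct causal consequence of the input current and the cell's own constants.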
While the authors continuously make use of the language of information theory, they describe in great detail how each step is followed by the next thanks to the chemical properties of the elements involved. In other words, they describe deterministic processes, but apparently prefer the language of another discipline altogether. Speaking of communication and messages is really confusing when you make it clear that the so-called message is a chemical reaction that in turn provokes other chemical reactions. We would certainly find it strange if a scientist started explaining the movements of the molecules in boiling water as "mass communication", wouldn't we?
Neurons, Action Potential and the Brain
This theme is certainly a favorite when it comes to the philosophical implications of vision (see the many contributions in "There's Something About Mary", eds. Ludlow, Nagasawa and Stoljar, 2004, discussing Jackson's hypothetical character, Mary, who grew up in a black and white environment before, as an adult, being allowed to step outside and experience color for the first time; both of Jackson's articles, 1982 and 1986, each defending a different position, are reprinted in this anthology).
I will not tire you with the history of Trichromatic Theory (Young-Helmholtz) and the counter-theory, Opponent Color Theory, of Helmholtz's nemesis Hering (see Turner, "In the Eye's Mind: Vision and the Helmholtz-Hering Controversy", 1994). You can also try to understand what color is by reading technical books like Valberg's "Light Vision Color" (2005) or Fairchild's "Color Appearance Models" (2013), both of which will make you wish you had been wise enough to content yourself with the simplistic explanations found in textbooks. Because, in the end, it becomes painfully evident that nobody really knows what they are talking about. Sure, Fairchild, and to a lesser extent Valberg too, will teach you about the history of color and the different systems that have been, and still are, used since 1931 to correlate physical processes with the subjective sensations of color. You will learn that all this time experts have been trying to translate subjective impressions into mathematical formulas, with very little success. And maybe, just maybe, you will learn that the different types of receptors that are supposed to give us each a distinct color sensation (and let us please stick with Red, Green and Blue, it is certainly good enough) become indistinguishable when the intensity of the light is high enough. In other words, that a "red" cone will also give a sensation of blue or green if enough of the corresponding photons fall on it.
Strangely enough, such a fact, which had already been studied by Hartline in 1935 (Graham & Hartline, "The response of single visual sense cells to lights of different wave lengths"; this article is neither mentioned nor referenced in Hofer et al., "Different sensations from cones with the same photopigment", 2005, which appeared 70 years later), does not seem to have had any influence on the approach to color in the field. I find it strange because it confronts us with a fundamental question that was central to the preoccupations of the 19th century pioneers, including Maxwell (see "The Scientific Papers of James Clerk Maxwell", ed. Niven, vol. 2, 2005, especially the articles about color vision): the relationship between the physical properties of light and the more intangible aspects of sensation and consciousness. These questions, which had seen a revival with Jackson's thought experiment, play no role whatsoever in the "scientific" treatment of color and color vision.
And that is certainly strange, because if there is one conclusion that pushes itself forward, it is that there is no physical trace of any color sensation whatsoever in the brain. Color is a typical, not to say paradigmatic, aspect of the so-called neural correlate of consciousness (Chalmers, "What is a neural correlate of consciousness?", 1998), especially since it concerns not only humans, but also organisms as simple as a sea snail like Aplysia californica.
I understand the drive of textbook writers who have to keep everything digestible for the average student. Still, the refusal of researchers to tackle this problem, even though there has hardly been any progress on the matter since the beginning of time, all the technical details notwithstanding, is, as far as I am concerned, based on an understandable but unjustified fear of getting drawn into the metaphysical swamps that philosophers, especially those of Anglo-Saxon cultural background, seem to enjoy swimming in. The question "Where and how is color coded in the brain?" can then give way, once we have asserted that it either is unanswerable or must be answered in the negative, to the next fundamental question: "What does that mean for the brain?" Nobody would blame scientists for refusing to discuss the possibility that non-material processes have to be considered in the analysis of brain functions. That is certainly a bridge too far, a bridge that will only be crossed the day philosophers come up with options that make it possible for scientists to use the tools of their trade. And that day may of course never come.
So, this is the fundamental question I propose to treat in this thread: what does it mean for the brain that there is no neural or chemical trace of (color) sensations in it? But first, I must of course try to convince you that there is, indeed, no such trace.
On and Off neurons
Hartline ("The response of single optic nerve fibers of the vertebrate eye to illumination of the retina", 1938) is the absolute ground zero as far as these neurons are concerned. Hartline was the first to mention On and Off responses to illumination, and that makes this article worth very close scrutiny.
The first thing to observe is that it is a "pre-receptive-field" article. He will introduce that concept only a few years later. For now, he is concerned with registering the responses of single optic fibers, but has to abandon the methods Adrian had used for the study of "the simultaneous activity of large numbers of optic fibers", without being able to fall back on those he himself had used for the study of Limulus' vision. Vertebrate neurons are much more densely packed so a new way had to be devised.
He starts with an excised frog eye, and works delicately on the optic fibers to isolate a couple of fibers, and even preferably a single one. "It is not until the bundles have been dissected down until only one, or at most only a few, fibers remain active that a new and striking property of the vertebrate optic response is revealed. For such experiments show conclusively that not all of the optic nerve fibers give the same kind of response to light." (my emphasis)
Please note the ease with which Hartline mentions the fact that his results are not exclusively based on the study of the responses of a single nerve fiber. He had already mentioned earlier on: "Attempts to obtain single fibers are successful in only a very small percentage of trials". I am not suggesting that Hartline's preparations or his results were faulty, only drawing the reader's attention to the fact that such an "open-mindedness" concerning the number of neurons used for analysis is, in a way, already a precursor of the concept of Receptive Field he would introduce in 1940. He would surely have chosen a different path if he had stuck to the principle of "one receptor, one optic fiber" that was the basis of his study of the Limulus. His choice was technically understandable, since it was not possible to study single receptors in complex organisms, but the ease with which he forgot this fact remains astounding. Had he been more attuned to the difference in approach, and to its consequences, he would certainly have been more critical of his own results. I can already say that, long before the formal introduction of receptive fields, his conclusion that there are On and Off neurons can be understood as a premature application of that future concept.
The well-known results of his experiment were the three types of responses that have become classic paradigms in the science of vision:
- a burst, followed by a steady discharge as long as the stimulation lasts, just as in the Limulus;
- response only to the light being turned on or off (onset and cessation of stimulus);
- and the famous Off response, whereby the fiber only responds to the cessation of illumination.
Hartline was apparently thrilled by the last two reactions. They made the distinction between the responses of the Limulus and the vertebrates very sharp and clear. What should have been a reason to look more critically at the results was taken as the confirmation of what I can only call an intellectual prejudice: the vision of complex animals had to be itself more complex than that of simpler organisms.
The figures he mentions right away would have otherwise certainly made him look twice at the results: "But while Limulus optic nerve fibers invariably show this type of response [the first one], in the frog's retina it is obtained in less than 20 per cent of the fibers." He continues without any hesitation: "At least 50 per cent respond ... with a short burst of impulses at high frequency when the light is turned on, but show no impulses as long as it continues to shine steadily; when the light is turned off there is another brief outburst of impulses."
I cannot help but think that you could not get a better hint that your results were somehow off track. If that were true, that would mean that whenever the light is turned on, our vision is working at half capacity, and remains that way until the light is turned off, allowing the other half to take over!
That does not strike me as very plausible.
Action potentials and frequency of firing
Let us look at the myth of action potentials and firing patterns in more detail. This is a myth that the authors uphold without any compunction ("we can see that the information about stimulus strength is now encoded in action potential frequency", p.342) in their otherwise excellent textbook. The problem with such an affirmation is that it is undoubtedly true: a scientist can deduce the stimulus strength just by looking at the firing pattern or frequency! So the latter (frequency) can be construed as a code for the former (strength). But is that also true for the brain? Does the brain really use frequency, or firing patterns, as a code?
Psychophysics data indicate that the strength of a stimulus is experienced as the intensity of a sensation. The question is whether the strength of our sensations is retained in our brain, and if it is, whether the mnemonic trace of this strength is as detailed as the sensation itself. That could of course indicate a form of coarse coding, but coding nonetheless. Let us follow the path firing patterns take to see if we can answer this question.
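Before following that path, it may help to recall the classical psychophysical statement of the stimulus-sensation relation, Fechner's logarithmic law; the constants in this sketch are arbitrary:

```python
import math

# Fechner's law: perceived intensity grows with the logarithm of the
# stimulus strength relative to the detection threshold i0.
# k and i0 are arbitrary illustrative constants.
def fechner(stimulus, k=1.0, i0=1.0):
    return k * math.log(stimulus / i0)

# Equal *ratios* of stimulus strength yield equal *steps* of sensation:
step1 = fechner(100) - fechner(10)
step2 = fechner(1000) - fechner(100)
assert abs(step1 - step2) < 1e-9
```

Note that this law relates physical strength to reported sensation; it says nothing about how, or whether, that strength is stored in the brain, which is exactly the question at issue here.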
The intracellular space (and/or the inner and outer membranes) of a neuron can be made more positive (depolarized), or more negative (hyperpolarized), with adequate electrical stimuli. Apparently, strong negative electrical currents have a different effect on neurons than strong positive ones. In the first case, "The size of the change simply mirrors the applied amount of current stimulus" (p.50). These so-called "passive membrane properties" are also observed when using small positive currents.
The difference in effect between (high) negative and (high) positive currents is certainly interesting, but remains unexplained. At least, the authors do not offer any explanation for this phenomenon, so that I can only, for now, register this mystery.
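The "passive" behavior itself is easy to reproduce: treat the subthreshold membrane as a simple RC circuit, and the size of the voltage change simply mirrors the injected current. The resistance and time constant below are illustrative round numbers, not measurements from the book:

```python
import math

# Passive (subthreshold) membrane modeled as an RC circuit: the voltage
# change after a current step is linear in the injected current.
# r (membrane resistance) and tau (time constant) are invented values.

def passive_response(current, t, r=100e6, tau=0.02):
    """Membrane voltage change at time t after a step of injected current."""
    return current * r * (1.0 - math.exp(-t / tau))

v1 = passive_response(100e-12, t=0.2)   # 100 pA, long after the step
v2 = passive_response(200e-12, t=0.2)   # doubling the current...
assert abs(v2 - 2 * v1) < 1e-9          # ...doubles the voltage change
```

This linearity is all that "the size of the change simply mirrors the applied current" amounts to; the interesting, and here unexplained, part is why strong positive currents break out of it.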
It takes positive current to depolarize (make more positive) the cell and create an action potential. I think we should stop a moment and ponder this fact.
An action potential means that whatever is happening in the brain does not stop with the current neuron we are studying, but is being propagated, in whatever form, to other neurons.
The absence of an action potential in a hyperpolarized state certainly does not mean that nothing is happening to the organism, and that there are, therefore, no sensations involved. This consideration puts the functional distinction between On and Off neurons in a completely different light (no pun intended). Some authors, following Kuffler (1953), interpret the difference as that between light and dark, usually black (Peter Gouras thinks it is rather blue; see his chapter on Helga Kolb's site Webvision). I think that both views indicate a rather naive interpretation of an apparent dichotomy which, let us not forget, is based on the concept of receptive fields. This holds of course only in the cases where Off neurons are identified with a hyperpolarized state, not when they do produce an action potential.
In the latter case, I will give an everyday example to show that the dichotomy probably does not hold either. Imagine entering a not very brightly lit room (maybe the curtains are drawn, or it is a cloudy day, or dusk is approaching, or all of the above), which prompts you to turn on the lights. You then decide that it is too bright, and turn them back off. Your retinal receptors and neurons will certainly react to the change in illumination in both cases, but will that turn them into On and Off neurons? Or will it just show the reactions of the same neurons, once to the lights being turned on, then to the lights being turned off? The fact that there are no apparent differences between the different types of neurons (On, Off, reacting to different features, etc.) seems to reinforce this view.
The idea that only reactions that produce an action potential are worth further study is illustrated by the concept of threshold. As Levitan and Kaczmarek put it: "The threshold is essential to ensure that small, random depolarizations of the membrane do not generate action potentials." They continue brazenly: "Only stimuli of sufficient importance (reflected by their larger amplitude) result in information transfer via action potentials in the axon." p.53
The threshold is, just like firing frequency, something that scientists can determine in an objective manner. It is therefore tempting to attribute to it, as inherent properties, those that are useful to us. It remains a question whether such a concept makes any sense when considering the brain as such. It could as well be the case that different amplitudes mean different inner experiences, with some pouring over their border as it were, while others just die out with the stimuli that triggered them.
Let us keep following the authors in their quest for neural coding: "Let us now examine the way the refractory period contributes to neuronal information coding." p.55
The concept of refractory period is very easily understood if you compare it with the need to recover your breath after shouting. The faster you shout, the more often you have to pause for breath; up to a point, because however fast you would like to shout, there is a limit you cannot ignore. Imagine now that it is the neuron doing the shouting, in reaction to the electrical torture you are subjecting it to. What message do you suppose you are sending to the neuron, besides the strength of the electrical stimulus itself? What kind of message would the neuron need to encode in such a situation? The strength of the stimulus? It is already reacting to it by shouting accordingly.
This is not the conclusion our authors think they can draw from the study of the refractory period: "Thus, even though action potential amplitude obeys the all-or-none law [the curve of the action potential remains the same for every stimulus above threshold] and does not reflect stimulus intensity, the phenomena of threshold, latency, and refractory period do indeed allow the encoding of stimulus intensity as a frequency code in the axon." p.56 (original emphasis)
This, once again, only proves that we, as external observers, can reconstruct the intensity of a stimulus from the frequency of the firing in the axon. But to be a neural code, it must be usable, and used, by the target neuron.
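The point can be put in a few lines: any strictly monotone intensity-to-frequency map is trivially invertible by an outside observer, but the inversion is something the observer does, not the target neuron. The encoder below is an assumed monotone function chosen purely for convenience, not a measured response curve:

```python
import math

# Schematic point, not biology: any strictly monotone intensity-to-frequency
# map can be inverted by an outside observer. The encoder is an assumption.

def encode(intensity, k=100.0):
    """Assumed frequency 'code': firing rate grows monotonically with intensity."""
    return k * math.log1p(intensity)

def decode(rate, k=100.0):
    """The observer's reconstruction: simply invert the monotone map."""
    return math.expm1(rate / k)

# The observer recovers the intensity exactly; the target neuron, of course,
# never runs decode() -- it just reacts to the spikes as they arrive.
for i in (0.5, 2.0, 10.0):
    assert abs(decode(encode(i)) - i) < 1e-9
```

That a decoder exists on paper says nothing about whether any neuron implements one, which is exactly the gap between "encodable by us" and "used by the brain".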
It is evident that the target neuron will react differently to a single electrical burst than to a continuous current. But how could that be construed as the reaction to a code instead of a direct reaction to a stimulus? Besides, we are the ones sending the message, and it is not a very complicated message: we turn the power up if we want a higher current to flow through the neuron, and down if we don't. No other kinds of messages can be sent to a neuron, since neurotransmitters do nothing but make such a stimulation possible.
Let us dwell on this point: all the possible messages we can send a neuron are fluctuations between a minimum and a maximum electrical current, and for these fluctuations to become a code, neurons must react differently to different fluctuations. But the only differences we have been able to observe are different reactions to:
- sub- or supra-threshold stimulation,
- speed of firing, or firing rate, in the limited sense that to every spike in the firing rate corresponds a release of neurotransmitters.
All neurons react in the same general way to a release of neurotransmitters or to electrical stimulation (with a corresponding action potential). Unless we could refine the reactions of neurons to the point where we would find that some neurons react to, say, 5 spikes, but not to 3 or 7, while others do exactly the opposite, we must conclude that the frequency of firing is neither a message (unless by message we mean a mere stimulus) nor a code; or, if it is a code, it is one the brain has already cracked, since a stronger stimulus gives rise to a more intense sensation.
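To make the counterfactual explicit, here is what a genuine spike-count "codeword" detector would look like, next to the monotone response that is actually observed. The detector is deliberately unrealistic and purely hypothetical; nothing like it appears in the literature discussed here:

```python
# Hypothetical "codeword" detector vs. the monotone response actually observed.
# count_selective is a deliberately unrealistic construction for illustration.

def count_selective(spike_count, preferred=5):
    """Responds only to its preferred spike count: a genuine 'codeword'."""
    return spike_count == preferred

def observed_neuron(spike_count, threshold=3):
    """What is actually reported: a monotone response above some threshold."""
    return spike_count >= threshold

# The codeword detector distinguishes 5 from 3 and from 7...
assert count_selective(5) and not count_selective(3) and not count_selective(7)
# ...while the observed response treats all supra-threshold counts alike.
assert observed_neuron(3) and observed_neuron(5) and observed_neuron(7)
```

Only something like the first function would deserve the name "code"; what is observed is the second.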
Accommodation, habituation, and other mental phenomena
We have all heard of differences in reaction to different stimuli when considering behavior in general, but the matter is particularly edifying when linked with the intrinsic properties of neurons, or with what are thought to be such properties.
Lightly stimulating a sea snail like Aplysia will at first provoke a withdrawal reaction, but if we keep the stimulation up, the reaction gradually fades, and at this point authors in general, and not only Levitan and Kaczmarek, suddenly change their language and revert to psycho-babble. They call it habituation, a term that comes not from the English word habit but rather from the French adjective habitué, meaning that the snail has gotten used to the stimulation and does not feel threatened by it anymore.
But such a hybrid concept hides the fact that we have left the field of chemistry and entered the field of (animal) psychology. It tells nothing new about the neurons involved, and all about the way the animal reacts to different situations.
Photo-receptors: where is that color sensation now?
I can be very short on the subject, with the benediction of Levitan and Kaczmarek: "For all the subsequent steps of transduction [the process of transformation of light sensitive elements to an action potential], it is useful to think of this molecule as analogous to a receptor that has just bound its neurotransmitter." (p.355) (my emphasis)
They continue by specifying that opsin, as the neurotransmitter in question, is very similar to other neurotransmitters found in other parts of the brain. That means that we have absolutely no reason to treat transduction differently from other neuronal processes, whereby a neuron secretes neurotransmitters or hormones, that open ion channels in the target neurons, which then experience a change in electrical balance and a subsequent action potential.
What is certainly important for our theme is the initial reaction of the photo-receptor to light. I will limit myself to cones, even if "[i]t is rods, however, from a variety of species, that have been the favored cell type for studying visual transduction." p.353
I have already mentioned the studies of Graham & Hartline (1935), and those of Hofer et al. (2005). Let me add that their results are really not surprising at all if one considers the graphs of the relative sensitivity of the different cones, information that is found in every elementary textbook on vision, and that has been known for a very long time. These graphs show that each type of receptor, the L, M and S cones (so called because they are most sensitive to Long, Middle and Short wavelengths respectively), is not exclusively sensitive to its optimal stimulation. They will react more readily to stimulation in their favorite range, but will also react, if that favorite stimulus is not present, to any kind of stimulus strong enough to cross their threshold.
This means that, just as with the amplitude of a stimulus in non-visual (or non-auditory) neurons, the reaction of the receptor will be an all-or-none response, with this fundamental difference: an external observer has no way of reconstructing the spectral nature of the stimulus from the firing of the receptor. All the frequency will tell him is how strong the light stimulus was. That would of course give the scientist a general indication, if he already knew the lighting conditions in which the response took place! Without this knowledge, our researcher will remain (pardon the pun) completely in the dark as far as the nature of the stimulus is concerned. He will be, to state it clearly, unable to say whether the neuron reacted to a green, red, or blue light!
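This is the "principle of univariance" in miniature: a receptor's output depends only on its photon catch, so wavelength and intensity trade off against each other. In the sketch below, the sensitivity curve is a made-up Gaussian stand-in for a real cone spectrum; only the 560 nm peak of the L cone is a borrowed textbook figure:

```python
import math

# Principle of univariance: a photoreceptor's response depends only on the
# photons it catches, so wavelength and intensity are confounded.
# The Gaussian profile is an invented stand-in for a real cone spectrum.

def cone_response(wavelength_nm, intensity, peak=560.0, width=50.0):
    sensitivity = math.exp(-((wavelength_nm - peak) / width) ** 2)
    return intensity * sensitivity        # photon catch drives the response

r_green = cone_response(530.0, intensity=1.0)
# A non-preferred ("blue") wavelength at higher intensity gives the SAME output:
needed = r_green / math.exp(-((470.0 - 560.0) / 50.0) ** 2)
r_blue = cone_response(470.0, intensity=needed)
assert abs(r_green - r_blue) < 1e-9
# From the response alone, an observer cannot tell 530 nm from 470 nm.
```

Two physically different stimuli, one indistinguishable output: this is all the external observer ever has to work with.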
We can therefore say that no neural trace is ever created of the color sensations an organism experiences.
That is the conundrum that every vision scientist has to face, and it is therefore not surprising that they all act as if it does not exist. This is a luxury they should not be afforded anymore. Hiding behind mysterious neural codes that explain away their incapacity to deal with painful facts should be considered even less acceptable. The number of graduate students who have become "doctors" in their field because they were able to solve a problem with mathematical and statistical means, and then claimed that the brain had to use a similar solution, is staggering. They should be asked to show how they think the brain is doing that, and vague observations about the computational capacities of the brain should be rejected without hesitation. There is of course no reason to refuse them a title in mathematics or statistics.
Intensity of a stimulus
We have seen that this property has a direct effect on the quantity of neurotransmitters released, but that it is otherwise unused. The only way the brain could keep track of it is by, somehow, registering the firing pattern, besides merely using it to modulate a neuron's response. But if it could do that, then we would have our neural code in all its glory. So, we are back to square one. Unless we could point at a type of neuron whose only function would be to register this frequency. In other words, it would have to be a mirror image of the firing pattern, without itself doing anything but reproducing this pattern whenever approached by other neurons. Since precise coding would be out of the question for practical reasons, we could have a form of sparse, or even crude, coding of a few intensities. But even with only two intensities (high and low), the extra number of neurons needed for all types of sensation, and not only visual sensations, would be very significant. Not only that: each memory of a sensation would have to be connected to such "intensity coding" neurons, which would very soon land us in astronomical numbers.
But is such neural coding possible? Let me state that there is no way, as logicians would say, to prove a negative: it is not possible to prove that the brain does not use any form of neural coding.
Still, we seem to be able to remember, at least vaguely, the lighting conditions associated with many memories. We can also remember the look of a colleague when he came back tanned from a vacation, and compare it with how he usually looked in the winter months. But is that a matter of sensation of intensity, or is it more simply that we remember different colors altogether? This is how Hering put it almost 100 years ago (probably longer, but I am taking the publication date as a reference).
"Die Farben sind es, welche die Umrisse jener Gebilde ausfüllen, sie sind der Stoff, aus dem das unserem Auge Erscheinende sich vor uns aufbaut; unsere Sehwelt besteht lediglich aus verschieden gestalteten Farben, und die Dinge, so wie wir sie sehen, d. h. die Sehdinge, sind nichts anderes als Farben verschiedener Art und Form." (Ewald Hering, "Grundzüge der Lehre vom Lichtsinn", 1920, p.4) [Translation: "It is the colors that fill in the outlines of those forms; they are the stuff out of which what appears to our eye builds itself up before us; our visual world consists solely of differently shaped colors, and the things as we see them, i.e. the things of vision, are nothing other than colors of various kinds and forms."]
In short, everything is color.
Such a conception has the advantage of beauty and simplicity, while lowering the necessity of a neural code at the same time. And that is all that we can ask of any theory concerning neural coding.
Last, I would like to point at a curious paradox: a fundamental property that can be recovered by scientific analysis, intensity, seems to be used hardly at all, whereas another, equally fundamental one, color sensation, which is used in all aspects of our perception, cannot be reconstructed with current scientific methods.
What does it mean for the brain that there is no neural trace of color sensation?
To be honest, I really do not know. I suppose that the first priority would be to try and falsify this affirmation, which would probably be a very fruitful undertaking, as long as one does not revert to cheap assumptions with no empirical underpinning.
[I would certainly not want to accuse Dacey ("Parallel Pathways for Spectral Coding in Primate Retina", 2000) of making cheap assumptions, but it remains a fact that the whole of his argumentation is based on what he is supposed to prove: that there is somewhere a neural code for color, and that it is only a matter of finding the right combinations of neurons to list all theoretical color sensations.]
Second, if taken seriously, this fact will be used differently by different philosophers, depending on their own beliefs and convictions. That is also a very rich road that could bring up interesting possibilities. No one can pretend to possess the only and absolute truth, and such pluralism in philosophical approach can only be welcomed.
More important is the necessity of taking into account this irremediable fact: we cannot explain neural processes by chemical means only. Mental facts should not be considered as a convenient way of confirming so-called scientific analyses, but must be present at every step of the way as necessary components of brain processes. That will take some getting used to, and vigilance will be necessary to avoid an easy relapse into the old ways.
The concept of neural divergence is very ambiguous. After all, we could consider divergence as the rule, rather than the exception. We could say, every time multiple neurons converge on a single one, that that neuron has diverged into all those neurons. Still, I could not say if such a neural phenomenon can be considered as a normal pattern in the brain. In other words, I think that the idea that the same neuron can be linked to more than one other neuron only makes sense when dealing with (memory or mental) associations, whereby different parts of the brain are using the same input. It strikes me as superfluous and wasteful of resources when applied indiscriminately, especially in the case of the retina where the number of optic fibers, in comparison to the number of receptors, is quite limited.
But here I must admit to a very frustrating defeat: I could not find a single, unequivocal, anatomical proof of divergence of foveal cones to two or more bipolar cells. A question asked on Helga Kolb's site, Webvision, did not yield a conclusive answer. I can only conclude that the conviction that cones are connected to two bipolar cells, one On, one Off, and that each of these is itself connected to a ganglion cell of the same persuasion, seems to be based on circumstantial evidence.
Still, the consensus is overwhelming, and I can no more than express my misgivings.
The double 1969 article by the Dowling and Werblin duo ("Organization of the Retina of the Mudpuppy", part 1 "Synaptic Structure", part 2 "Intracellular Recording") is the worthy follow-up to Hartline (1935) concerning On and Off neurons. Cited almost 1300 times, its influence after all those decades is still undiminished. I will skip the second part in its entirety, even if it is very interesting reading, for the simple reason that it is a typical Hubel and Wiesel kind of article, with all their weaknesses as I have shown in other threads:
1) Wide light stimulation (100 microns) makes it impossible to stimulate a single receptor and therefore determine the source of the response;
2) use of receptive fields makes the determination even harder;
3) use of light stimulations to determine the responses of neurons (horizontal, bipolar, amacrine and ganglion cells) that are themselves not light sensitive.
And of the first part, I will mention only one point: the connection of cones, especially foveal cones, to two bipolar cells, once through an invaginated synapse, and once through the base.
I would like to run down the consequences of this baffling divergence.
1) Duplication of a stimulus
It is generally agreed that photoreceptors release a single neurotransmitter. The distinction between On and Off bipolar cells is therefore not the receptors' doing; it is intrinsic to each bipolar type. These types are identified by their receptors for the same neurotransmitter (glutamate): they are either metabotropic (On) or ionotropic (Off). This is as far as I am willing to go concerning chemical details, for the simple reason that we do not need more than that.
Since these two different reactions will take place each time a cone is stimulated, we can simply speak of the duplication of the stimulus through an On and an Off pathway.
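The duplication just described can be sketched minimally. The signs of the responses (metabotropic receptors inverting, ionotropic receptors conserving) follow the standard account; the numerical values and scaling below are illustrative, not physiological constants:

```python
# Minimal sketch of stimulus duplication: one cone releases one transmitter
# (glutamate), and the On/Off split is intrinsic to the bipolar cell's
# receptor type, not to the cone. Values are illustrative only.

def cone_output(light):
    # Vertebrate photoreceptors hyperpolarize to light:
    # more light, less glutamate released. light in [0, 1].
    return 1.0 - light

def on_bipolar(glutamate):
    # Metabotropic receptor: sign-inverting, so the cell responds to light.
    return 1.0 - glutamate

def off_bipolar(glutamate):
    # Ionotropic receptor: sign-conserving, so the cell responds to dark.
    return glutamate

g = cone_output(light=0.9)              # bright stimulus
print(on_bipolar(g), off_bipolar(g))    # On channel high, Off channel low
```

The same single glutamate signal thus appears twice downstream, once with each sign, which is all that "duplication" means here.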
2) The costs
Duplication means halving the connection possibilities of the retina, or at least of the fovea, to the brain. We can use only half of the available foveal optic fibers to represent the foveal visual scene, since each point in that scene (as an activated receptor whose signal gets through to the bipolar level) will be represented by 2 fibers.
Then we have to discount the fact that not all the fibers in the remaining half can be active at the same time, since about half of that half is composed of Off neurons that are turned off by light, and vice versa. That brings us to 25% of the number of optic fibers theoretically available for foveal representation which can be active at the same time.
3) What for?
Here again, the magic word is contrast. I think I have said enough on this subject already, and you will pardon me if I will not repeat myself.
4) A mystery unsolved
I have no way of refuting the affirmations of Dowling and Werblin concerning the existence of a double connection of cones to bipolar cells via an invaginated synapse, and a flat, base connection. Since I agree with the authors, and everybody else, that the receptors are not capable of choosing which of the bipolar cells will be activated, and that the activation of both creates many (theoretical) problems, I can only hope that further research will bring with it an unequivocal solution to this mystery.
But what we certainly do not need are declarations of faith like the following: "The process of splitting images into multiple components tuned to selective visual features begins with differentiation of different photoreceptor types but is then greatly elaborated at the synapses between photoreceptors and bipolar cells." (Nelson and Connaughton, "Bipolar Cell Pathways in the Vertebrate Retina", Webvision). These lines (and, in defense of their authors, similar lines are to be found in countless other articles concerning the brain in general, and vision in particular) suggest that, long before retinal signals have reached the brain, or even the optic nerve, the receptors already know they are dealing with images, which they then split into parts that are then processed differently by different parts of the retina. We have left the field of science and entered the realm of miracles. A position the authors, I am sure, would be horrified to hear attributed to them, and which their professional achievements certainly do not justify.
Let us say the visual scene consists of 1000 points that can be represented by 1000 optic fibers (as a theoretical maximum of what a retina could achieve).
We are allowed to represent only 500 points, since we need 2 fibers per point.
We are still using 1000 photoreceptors, whose signals are divided between 500 Off and 500 On ganglion cells (to keep things simple). Each group can represent a maximum of the same 500 points, but not at the same time.
So, I am not sure I am technically justified in concluding that only 25% of the theoretically available capacity is used. After all, each time 500 fibers are used. But they are representing only one aspect of the visual scene, the On or Off aspect.
I will leave it to statisticians to devise the right formula.
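In the meantime, the arithmetic of the example can at least be written out. All counts below are the text's own simplifying assumptions, under the reading that each point drives exactly one of its two fibers at any instant:

```python
# The 1000-point example, written out. All figures come from the
# simplifying assumptions above, not from anatomy.
receptors = 1000
fibers = 1000                          # theoretical maximum: one fiber per point
points_representable = fibers // 2     # each point needs an On and an Off fiber

# At any instant a point drives either its On or its Off fiber, not both,
# so at most half of all fibers are active at once, and each active fiber
# carries only one aspect (On or Off) of its point.
simultaneously_active = fibers // 2
fraction_of_theoretical = simultaneously_active / fibers

print(points_representable)      # 500 points representable at all
print(simultaneously_active)     # 500 fibers active at any one time
print(fraction_of_theoretical)   # 0.5 of the fibers, for half the points
```

Whether one then calls this 50% of the fibers for 50% of the points, or 25% of the theoretical point-capacity, is exactly the bookkeeping question left open above.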
It also makes you wonder when the contrast function can come into play. After all, On and Off neurons have to be active at the same time if they are to enhance contrast at all. But that brings us into conflict with the respective properties of On and Off neurons. An extra complication for this model.
I would like to close this thread with the following considerations:
Interneurons as nano-computers
The problem that any massively interconnected and distributed system has to face, and we can safely assume every brain is one, is how to prevent all connections from lighting up at the same time, whatever the trigger. Since, ultimately, every neuron is, directly or indirectly, connected to every other neuron, seeing, say, a red flower would not only activate every memory we have of red flowers, but also of everything red and of every flower, and the different colors would in turn activate corresponding memories, and so on, almost ad infinitum.
There are at least two ways of solving this problem:
1) Spatial limitations: divide the brain into more or less distinct parts, and make every trigger local, leaving specialized pathways for interstate communication;
2) Tag memories chemically to activate only those you want when you want them.
And of course, a combination of 1 and 2.
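The runaway problem, and the effect of the first solution, can be illustrated with a toy spreading-activation sketch. The association graph, the region labels, and the traversal rule are all invented for illustration:

```python
# Toy sketch: unrestricted spreading activation lights up everything,
# while a spatial (region) limitation keeps the trigger local.
# Graph and regions are invented for illustration only.

associations = {
    "red flower": ["red", "flower"],
    "red": ["red car", "red sunset"],
    "flower": ["tulip", "rose"],
    "red car": [], "red sunset": [], "tulip": [], "rose": [],
}
region = {"red flower": "garden", "red": "garden", "flower": "garden",
          "tulip": "garden", "rose": "garden",
          "red car": "street", "red sunset": "sky"}

def activate(start, local_to=None):
    """Follow associations from `start`. If `local_to` is given, spread
    only within that region (solution 1: spatial limitation)."""
    seen, frontier = set(), [start]
    while frontier:
        node = frontier.pop()
        if node in seen:
            continue
        if local_to and region[node] != local_to:
            continue                      # block spread outside the region
        seen.add(node)
        frontier.extend(associations[node])
    return seen

print(len(activate("red flower")))                     # 7: everything lights up
print(len(activate("red flower", local_to="garden")))  # 5: only local memories
```

The second solution, chemical tagging, would amount to filtering on a per-memory label instead of a per-region one; the traversal logic would be the same.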
Both solutions are strongly in need of a homunculus that oversees the whole situation and can decide which memory belongs with which, and which tag should be used in which situation.
Can we get rid of the homunculus?
It has been shown experimentally that going, for instance, from the living room to the kitchen makes remembering certain tasks more difficult. This is a very clear application of the first rule. Such a rule cannot of course be absolute, otherwise we would forget everything that happened to us before we entered a new space. And that is exactly the difficulty we face in determining the scope of this rule.
We can of course always imagine that recent memories have a different chemical signature than old memories, and that associations work according to a chemical schema (as seen externally through the eyes of a scientist) whereby newer memories get activated first, and the older a memory, the longer it takes to be activated chemically. We do not need to assume the existence of a homunculus; all we need is to find the relevant chemical signatures and processes (Squire and Kandel, "Memory: From Mind to Molecules", 2008).
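The schema suggested above, newer memories surfacing first without anyone choosing among them, can be sketched as follows. The timestamps and the identification of "chemical signature" with a simple age-dependent latency are illustrative assumptions:

```python
# Hedged sketch of the recency schema: the older a memory, the longer its
# (purely illustrative) activation latency, so recent memories surface
# first with no homunculus selecting among them.

def activation_order(memories, now):
    """memories: dict name -> timestamp of formation.
    Return names ordered by how quickly they would activate (newest first)."""
    latency = {name: now - formed for name, formed in memories.items()}
    return sorted(memories, key=lambda name: latency[name])

memories = {"breakfast": 100, "last vacation": 60, "childhood home": 5}
print(activation_order(memories, now=101))
# → ['breakfast', 'last vacation', 'childhood home']
```

The ordering emerges from the signatures alone, which is the sense in which the homunculus becomes dispensable.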
If we could do that, it would probably make the second rule superfluous and unnecessarily complicated, since I have no idea how to apply it, unlike the first, without the help of a homunculus.
Another advantage of such a chemical solution is that it resolves the dilemma between local (activated by our presence in, say, the kitchen), and global memories. The locality principle still remains interesting and understandable from an evolutionary standpoint. After all, proximal space is where imminent dangers, and opportunities, arise.
Interneurons are supposed to do just that, choose between different associations and inhibit others. That is why they are often considered a paradigm of the computational capacities of the brain. Crandall&Cox ("Local dendrodendritic inhibition regulates fast synaptic transmission in visual thalamus", 2012; see also Szydlowski et al., "Target Selectivity of Feedforward Inhibition by Striatal Fast-Spiking Interneurons", 2013) do not hesitate to call them "multiplexors, containing numerous independently operating input-output devices".
They devised a very exciting experiment in which they showed that exclusive stimulation of a dendrite had local effects. This means, according to them, that interneurons are capable of choosing their output without activating the whole cell, and all other dendrites.
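On this multiplexer reading, an interneuron would amount to something like the following toy sketch, in which each dendrite pairs an input with a local output independently of the rest of the cell. The thresholds and the all-or-none release rule are invented for illustration, not taken from the experiment:

```python
# Sketch of the "multiplexer" reading of Crandall & Cox: each dendrite is an
# independently operating input-output device, releasing locally when its
# own input is strong enough, with no global spike required.
# Thresholds and wiring are illustrative assumptions.

class Interneuron:
    def __init__(self, n_dendrites, threshold=0.5):
        self.n = n_dendrites
        self.threshold = threshold

    def respond(self, inputs):
        """inputs: one drive value per dendrite. Each dendrite releases
        locally if its own drive crosses threshold, independently of the
        others; the cell as a whole never has to fire."""
        assert len(inputs) == self.n
        return [drive > self.threshold for drive in inputs]

cell = Interneuron(n_dendrites=4)
print(cell.respond([0.9, 0.1, 0.6, 0.2]))  # only dendrites 0 and 2 release
```

The sketch shows what the claim amounts to computationally; it says nothing, of course, about how the right inputs reach the right dendrites, which is exactly the problem raised below.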
That is exactly the possibility we were talking about earlier, and it seems to be confirmed by this experiment. The problem, as is often the case, lies in the interpretation of the results.
The authors are considering such a neural circuit as a standalone system that can take independent decisions based on the input it is receiving. That is completely understandable, and if such a process were not possible in vitro, we would have to explain how such a function is possible in vivo.
But we must not forget that the problem we were facing was what to activate, and what not. And that is a choice the researchers have taken out of the equation. The problem was not, even if it has not been resolved yet, what to do with the input once we have given it the right tag and the right location, but how to determine the nature of that input, and where to "put it in".
As for the localized response of the interneurons (about which I cannot say anything meaningful, it being a technical matter I hardly understand), and their functioning as nano-computers, I think that once the problem of the input has been solved, chemical processes can go their own way without the need of foreign concepts polluting the lake.
"Memory: From Mind to Molecules", 2008, is by Squire and Kandel. See also, from Kandel alone this time, "In Search of Memory: The Emergence of a New Science of Mind", 2006. These references should not be seen as an endorsement of the philosophical views of the authors, Kandel especially.