Back    All discussions



This thread has been abusively deleted. The Philpapers Team offered me the opportunity to restore it.

"How many threads do you need to restore? Combining multiple posts into one would be a way to get around the limitation on 2 posts, and would also be less work for you. Since they were previously accepted, we'll make sure to accept them if you notify us ahead of time with the subject heading." The PhilPapers Team


1 Hearing

George: I hate ears!
me: You sound just like Hate-Smurf. You did not mind the semi-circular canals, so why hate ears?
George: I don't mind the canals, in fact I love them. They're really fun. Any time you move your head I get to whoosh from one pool to the other!
me: And riding the basilar membrane, that's not fun? That's just like a trampoline, isn't it?
George: Yeah, I guess it is. Or at least it would be, if I could play the drums instead of having them bellowing in my head!
me: I don't understand.
George: Of course you don't. You're not supposed to. It's a homunculus thing, and you're a scientist.
me: Hey! I'm not a scientist!
George: yeah, well, don't say it too loud. Not if you want them to take you seriously!

Some References:
- Mersenne: "L'Harmonie Universelle", 1636;
- von Helmholtz: "On the Sensations of Tone as a Physiological Basis for the Theory of Music", 1862/1954;
- von Békésy: "Experiments in Hearing", 1960;
- Blauert: "Spatial Hearing", 1997;
- Beament: "How We Hear Music: The Relationship Between Music and the Hearing Mechanism", 2003;
- Gelfand: "Hearing: An Introduction to Psychological and Physiological Acoustics", 2004;
- Schnupp et al.: "Auditory Neuroscience: Making Sense of Sound", 2011;
- Heller: "Why You Hear What You Hear", 2013;
- Moore: "An Introduction to the Psychology of Hearing", 2013;
- Langner: "The Neural Code of Pitch and Harmony", 2015.

The Physics of Hearing

Textbooks on hearing traditionally start with the physics of sound, which is in fact quite different from the physics of hearing. Blauert (1997) seems to be one of the rare exceptions. He makes the distinction between a sound source and an auditory event, and refuses the easy amalgam of the two. The first is a physical phenomenon that can be studied scientifically and can be the object of (dis)agreement between different individuals; the second is a personal sensory experience that cannot be shared directly: what you hear and what I hear are not necessarily the same thing.
There must of course be a link between the two, and the question is what that link is. The distinction Blauert makes does not stop him from following in the footsteps of his predecessors. He bases his analyses on the same facts as they do, even if the nuance in his approach allows him at times to discern discrepancies in the way certain experiments have been set up. Authors who do not make this distinction present results that are ambiguous, because it is not clear whether they apply to the one, the sound source, or the other, the auditory event. Ears and hearing can easily be compared with the way a telephone works.
What I find particularly instructive is the fundamental difference in approach between acousticians and hearing-aid engineers. While the first group concentrates on the mathematical description of waves, the second has made it its main task to understand sound as a serial phenomenon. Cochlear implants are, just like the telephone, based on the translation/transduction of waves into electrical events. Those events cannot happen in a parallel way: after all, there is only one membrane to receive those impulses at any given time.
The smallest bones in the body, the ones on which George would love to sit while pounding on the tympanic membrane with hands and feet at the same time, seem to be relegated to obscurity, while they are in fact the most important factor in the whole process. They are the first relevant phase in the sound production process.
Not that the Ear and Head Transfer Functions are unimportant. The way sound propagates from its source to humans is indispensable knowledge for sound engineers, be they designers or fighters of sonic pollution.
Nonetheless, it is knowledge that is inaccessible to the hearing brain, and as such irrelevant to the physics of hearing, in contrast with the physics of sound.
Strictly speaking, even the role of the ossicles could be considered irrelevant: all we need is the effect of their actions on the membrane, and its role in activating the auditory neurons.

From serial to sensory:
Vision, as far as we know, is definitely a parallel process: we do not have the time to "look at" each element of the visual field one after the other. Hearing, even though it seems to us that we hear many sounds simultaneously, is just the opposite. Sounds that reach the tympanic membrane make it move, and it in turn makes the ossicles pound on the oval window, one hit at a time. The number of hits per second, their frequency, determines the reaction of the basilar membrane, the last stage before the neurons take over.
How many hits does the basilar membrane need to know how to react? More than two? The frequency is set by the first two hits; all the others are only needed to indicate the duration of the stimulus, not its quality. But are two hits physically sufficient to make the membrane vibrate at the right frequency, or do we need many more than that? This is quite a serious question. After all, if the stapes is supposed to react fast enough to incoming sounds to give us the illusion of simultaneity, then the fewer hits needed for each sound, the better. Puzzling phenomena like the missing fundamental, or the suppression of the second sound, might be more easily explained once the seriality of sound sensations is taken into consideration.

There seem to be only two fundamental physical properties of sound: frequency and amplitude. Everything else follows from those two. Likewise, those two properties seem to be the only properties that can be relayed to the brain. This raises the question not only of pitch and timbre; after all, those are only sensations, and we are used to the idiosyncrasies of our brain. No, the mystery resides in the fact that these physical properties seem to hide a third one, as tangible as can be: the spatial location of sound. We very often know where a sound comes from. Even assuming imperfect knowledge based on experience, it remains a mystery how we could know that. Where are the spatial codes hiding?

The last remaining puzzle (for which I have no solution) is the convergence of so many auditory neurons (around 20) on the same inner hair cell.
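The claim that the first two hits fix the frequency, and that later hits add only duration, can be put in a minimal sketch. The function name and the idealized reading of "hits" as clean timestamps are my own illustration, not anything from the hearing literature:

```python
def frequency_from_hits(hit_times):
    """Frequency (Hz) as fixed by the first two hit times; later hits
    only extend the perceived duration of the stimulus."""
    if len(hit_times) < 2:
        raise ValueError("at least two hits are needed to set a frequency")
    period = hit_times[1] - hit_times[0]      # seconds between the first two hits
    duration = hit_times[-1] - hit_times[0]   # later hits contribute only duration
    return 1.0 / period, duration

# e.g. hits every 10 ms give a 100 Hz tone lasting 40 ms:
# frequency_from_hits([0.00, 0.01, 0.02, 0.03, 0.04])  ->  (100.0, 0.04)
```

Whether two real percussions suffice to set the membrane vibrating at that frequency is precisely the physical question left open above; the sketch only states the claim, it does not answer it.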

2  Sound Source Localization
The most striking aspect of all the experiments concerning the localization of sound sources is their artificial character. Not only are subjects very often restrained in their searching (head and eye) movements, they are also generally expected to localize a single, one-time sound of limited duration. The results after more than a hundred years are very eloquent: nobody agrees with anybody, to put it somewhat dramatically.
There are of course some trends recognized by all, especially the fact that humans are much better at localizing sounds that happen in front of them. As soon as sounds are produced above the horizon, above their heads or behind them, results become quite erratic, depending on the way each experiment has been set up.
Also, the distinction between left and right seems to be made quite accurately within a reasonably wide range. 
Last but not least, results seem to differ from one individual to another, making general conclusions even more hazardous. A very interesting article is the one with the long-winded title by Rankin et al., "An Assessment of the Accuracy and Precision of Localization of a Stationary Sound Source Using a Two-Element Towed Hydrophone Array" (2013), in which they report on their attempts... Well, the title says it all. Let me just add that it takes place on the open sea and that the sounds are produced by underwater transducers while the hydrophones are towed by a ship.
I found the summary of their findings in the excellent abstract [something that happens alas very rarely] very insightful: 
"Bearings to the sound source were determined based on the time-delay of the signal arrival at two of the hydrophone elements. Localization was estimated by visually inspecting the convergence of bearing angles to the source. The accuracy of bearings and localization improved as the ship approached the sound source." Here are the key points as I see them:
- Different arrival times at the two hydrophones are the main acoustic cue.
- The sounds are localized only after a whole series of recordings and computations.
- Visual analysis is used to interpret the acoustic findings.
- Precise localization is quasi-impossible: only a closing approximation of the source is feasible.

Except for the first one, all these points show how unrealistic most experiments in this field are. It is no wonder that it seems impossible to get any conclusive results. If hearing, for humans, is a secondary sense as far as spatial localization is concerned, if it is meant therefore to be supplemented by vision and experience, then the researchers' expectations may simply be too high. The question is, would the same image remain plausible once we take the first point into account?

Concepts like the Interaural Time Difference (ITD, the difference in arrival time of the sound at the left and right ear) and the Interaural Level Difference (ILD, the difference in sound intensity as experienced by each ear, also called IID for Intensity; see Popper et al., "Sound Source Localization", 2005) are fundamental in this context. They are also typical homunculus concepts. They assume that the brain can compare two input streams and then take the appropriate action. That would mean that the brain is able to "look at" both inputs in a neutral, analyzing way, just as we are able to in everyday life or in a scientific experiment. That is the difficulty with homunculus illusions: they seem so plausible. After all, we are capable of comparing two sounds, so our brain must somehow be able to do that too. But hearing sounds is a sensory process; analyzing them is an intellectual one. Some theories, starting with Helmholtz, tend to consider the boundary between both stages as quasi-inexistent. It really becomes a problem, though, when the intellectual process seems to precede the sensory one; that is, if we somehow seem able to determine, even before we hear the sounds, what the difference is between them. Which seems to be the only way we could assign to each sound its place in our experience.
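Returning for a moment to the first of the bullet points above: bearing from a time delay at two receivers is simple far-field geometry. A minimal sketch, with the function name and the 1500 m/s seawater default being my own additions (Rankin et al. publish no code):

```python
import math

def bearing_from_delay(delta_t, spacing, c=1500.0):
    """Far-field bearing (degrees off the array axis) from the arrival-time
    difference delta_t (seconds) between two hydrophones `spacing` metres
    apart. c is the speed of sound, roughly 1500 m/s in seawater."""
    x = max(-1.0, min(1.0, c * delta_t / spacing))  # clamp rounding noise
    return math.degrees(math.acos(x))

# No delay means the source lies broadside to the array, 90 degrees off-axis:
# bearing_from_delay(0.0, 10.0)  ->  90.0
```

Note that the arccosine yields only a cone of possible directions around the array axis, which is exactly why the array has to be towed past the source before the bearing lines converge on a location.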
If the experience of the difference between two sounds does precede its analysis, then there is no problem with considering the intellectual process as fundamental in processing these inputs. It becomes a normal theoretical debate.
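To see concretely what computation the ITD models attribute to the brain, here is the comparison done openly, as an engineer would do it: a brute-force cross-correlation of the two ear signals. The function and its sign convention are my own illustration of the concept, not a model anyone in the literature endorses:

```python
def estimate_itd(left, right, fs):
    """Lag (seconds) that best aligns two equal-length ear signals, found
    by brute-force cross-correlation. A negative value means the right
    signal lags, i.e. the sound reached the left ear first."""
    n = len(left)
    best_lag, best_score = 0, float("-inf")
    for lag in range(-(n - 1), n):
        # correlation of `left` against `right` shifted by `lag` samples
        score = sum(left[i] * right[i - lag]
                    for i in range(max(0, lag), min(n, n + lag)))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag / fs
```

The point of the passage above is that positing this loop inside the brain smuggles in a homunculus; the sketch merely makes explicit what is being posited.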
It remains however very problematic to assign such intellectual processes to all animals, or even to all humans (babies, children, illiterates, the mentally challenged...). The ILD and ITD concepts are very similar to that of binocular disparity. As I have tried to show, it is very unlikely that parts of the brain are able to compare what each hemisphere is receiving. This is something only the whole brain or organism can do. Still, whatever intellectual processes we believe are involved, they have to work with whatever our sensory organs provide us with, and the corresponding sensations. Once we accept the idea that we are somehow "aware" of a difference in arrival time or intensity in one ear relative to the other, we can ask ourselves whether the brain really needs anything else to determine the direction of the sound source. Would the sensation that a sound has reached our left ear before our right ear not make all further computations redundant? ILD and ITD seem to me to be the logical consequences of a mathematical/computational approach that denies the existence of sensations and must therefore seek refuge in complex calculations to explain what the brain does so naturally.

What does Tastee Wheat taste like?
The people in "The Matrix" had no way of knowing that, just as congenitally deaf people have no way of knowing how voices, or sounds in general, are supposed to sound. Still, they are able to learn with the help of cochlear implants. A YouTube clip of a young child, under seven, shows him having a conversation with the interviewer and other people present while continuing to play with what look to be Lego-like blocks. Quite a technological and human achievement that would have been unthinkable a few decades ago.
There is no doubt that these implants will keep being improved with each passing year, a very positive development for people all over the world.

A Fundamental Flaw?
I hope to convince the reader that I wish in no way to belittle the progress made in this area, but I am myself convinced that the approach to cochlear implants, however successful it might seem, is fundamentally wrong.
The following quote from Clark, "Cochlear Implants: Fundamentals and Applications", 2003, is representative of the current approach:
"Cochlear implants should aim to reproduce the coding of sound in the auditory system as closely as possible, for best sound perception." (ch. 5, my emphasis)

There was once a documentary on TV about a tribe that honored their elders by chewing their food for them when they lacked the teeth to do it themselves. That is the impression I get when I read this quote: the implants are doing the work that the patient's auditory system is supposed to be able to do unaided.
The fact that with only a very limited number of electrodes (about 22, according to the 2005 edition), relative to the 20,000 inner hair cells, patients are able to learn so much is truly astounding. I will come back to the theoretical significance of this ratio in a little while. First I want to dwell on the question of pre-coding and its disadvantages.
The makers of cochlear implants have to remove the basilar membrane in its entirety, if it is still present. Unluckily, instead of trying to emulate its effects, the technicians have followed the erroneous views of theoreticians of brain processes in general, and of the auditory system in particular.
Researchers concentrate their efforts on ameliorating the speech-processing abilities of their implants, while they should be directing all their effort towards a better delivery of sound in the form of electrical impulses. In plain words, they should be trying to replace the effects of the basilar membrane instead of creating an auxiliary auditory system outside of the brain.

Number of electrodes
The ratio of electrodes to inner hair cells necessary for a working hearing system would seem to indicate that, just as with vision, each auditory fiber is capable of relaying all, or at least many, sound sensations. Also, technical results show that applying serial instead of simultaneous stimulation gives better hearing results. This is in accordance with the serial nature of the stapes' action, and therefore of the acoustic data that impinges on the basilar membrane, and from there on the auditory fibers.
Put very simply, implant technicians need "merely" to transduce the sounds received by the microphone into electrical impulses and relay them through as many electrodes as they can to as many locations in the cochlea as possible.
I am not even sure whether they should bother with the same frequency-wise spatial distribution as seen in the natural basilar membrane. It could well be that the distinction between high frequencies (at the base) and low frequencies (at the apex) is related to the mechanical properties of the membrane more than to the processing abilities of the corresponding neurons.

In summary: get the sound (impulses) to the auditory system and let it do its work. 
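One literal, purely hypothetical reading of that summary, with every name, the round-robin scheme, and the threshold invented here for illustration, would be: turn the microphone signal into a serial stream of impulses and spread them over the electrodes with no frequency pre-coding at all:

```python
def distribute_impulses(samples, n_electrodes, threshold=0.1):
    """Hypothetical sketch of the 'no pre-coding' idea: any sample loud
    enough becomes one impulse, handed round-robin to the next electrode,
    with no speech processing and no frequency-place mapping."""
    impulses, electrode = [], 0
    for t, s in enumerate(samples):
        if abs(s) >= threshold:
            impulses.append((t, electrode, s))  # (sample index, electrode, amplitude)
            electrode = (electrode + 1) % n_electrodes
    return impulses

# distribute_impulses([0.0, 0.5, -0.6, 0.05], 2)  ->  [(1, 0, 0.5), (2, 1, -0.6)]
```

Whether anything this naive could work clinically is exactly the empirical question; the sketch only shows how little machinery the proposal, taken at face value, asks for.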

3 The homunculus explained

me: George, would you mind?
George: You know you are asking me to reveal secrets that have been kept, well, secret, since the dawn of Man? Gosh! I have always wanted to say that. Just for allowing me this opportunity, I think I will help you out a little bit.
me: Much obliged.
George: Now you sound just like Elvis.
me: would you get on with it!
George: Okay! Okay! Imagine you are listening to a sound that is reaching both your ears, but not at the same time.
me: ILD or ITD?
George: Whatever. What I do is take what you hear, not me, since they are your ears, if you catch my drift. [George giggles!] Then I decide which sound arrived first, and what they look, I mean, what they sound like, and then I whisper these differences in your ear. I mean in your mind.
me: How can you do that?
George: Easy. Wait! Do what?
me: How can you know the difference if you don't use your own ears?
George: Helloo! Because I don't need to! Homunculus? Moi? Remember?
me: George, you don't think you have a little George in you, do you?
George: This is really sick, you know! That'll teach me to... Teach humans! 

4 Calculations and Sensations
I have repeatedly hammered on the necessity of not losing sight of sensations, because all our calculations are not only based on them but, eventually, point back to them. Ignoring them compels the researcher to look for artificial, biologically implausible artifacts to explain those calculations. Sound is no exception; or rather, it is a beautiful illustration of the intimate relationship between calculations and sensations.

Why the Basilar Membrane?
After all, the percussions of the stapes on the oval window are what it is all about.
The problem of course is for the brain to interpret serial, telegraph- or Morse-like streams of data and turn them into a meaningful whole. Each series has to be distinguished from the next, and for that the brain would need a neural decoder, which it does not have. This is where the basilar membrane comes into play. Sounds are distinguished through their respective frequencies without any computation or decoding process. Physical and mechanical laws take care of that. The membrane reacts differently to different rhythms, just like the jumping rope of little girls reacts differently to each wrist and arm movement. The rope does not need to compute all those variables before it can react accordingly, and neither does the basilar membrane when it reacts to a seemingly meaningless stream of percussions. The basilar membrane is seen by all as a biological frequency analyzer, or even as a Fourier analyzer by those who need the mathematical security blanket.
One percussion moves the membrane in an indeterminate way, but the second one defines the final reaction, with the ones following either continuing the same pattern or breaking it.
The movements of the basilar membrane activate different neurons according to their specific patterns. The details are, for now, irrelevant. What counts is the following paradigm:

series of percussions --> membrane movement --> neuronal activation

Still, membrane movements can be considered as continuous, and even when they are not, the brain needs some means of distinguishing one series of movements from another.
This is where we realize that we cannot do without sensations. A percussion or movement pattern evokes a definite sensation depending on its frequency. The brain does not need first to determine a frequency in order to feel the corresponding sensation. In fact, once the sensation, and its intensity, has been felt, there is no need anymore for the brain to record either of the two. And even if it wanted to, that is, if we decided that the brain must somehow code for frequency and amplitude, we know that it cannot: there is nowhere in the neuron where such codes could be hidden. A second series of percussions will evoke another sensation, and that is how the brain can differentiate between different sounds. It does not need to know when one series ends and another one starts. It just feels it. The same way the little girls' rope just knows when one of them is ready to jump. Or is it the other way around? Can little girls compute the frequency of the rope before deciding when to get under it and jump? Do our researchers have some equations hidden at the bottom of a drawer somewhere that would explain such wondrous abilities of our children?
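For those who want the "mathematical security blanket" spelled out: this is the kind of frequency analysis the membrane is credited with, done the slow, explicit way. A naive discrete Fourier transform, written from scratch as my own sketch:

```python
import math

def dominant_frequency(samples, fs):
    """Frequency (Hz) of the strongest component in `samples` (sampled at
    fs Hz), found with a naive discrete Fourier transform. What takes
    O(n^2) multiplications here, the membrane does by simply vibrating."""
    n = len(samples)
    best_k, best_mag = 0, 0.0
    for k in range(1, n // 2):  # skip the DC bin, stop below Nyquist
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * fs / n
```

A 50 Hz tone sampled at 400 Hz comes back as 50.0. The contrast intended by this section is that the brain needs no such loop: the sensation arrives before any number does.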

5 Calculations and Sensations (2)
The cocktail party effect has created many insoluble problems for researchers. Bregman's "Auditory Scene Analysis: The Perceptual Organization of Sound", 1994, is considered a classic in the field. This is how the same author describes the problem of hearing in his foreword to a more recent work edited by Divenyi, "Speech Separation by Humans and Machines", 2004: "There is a serious problem in the recognition of sounds. It derives from the fact that they do not usually occur in isolation but in an environment in which a number of sound sources (voices, traffic, footsteps, music on the radio, and so on) are active at the same time. When these sounds arrive at the ear of the listener, the complex pressure waves coming from the separate sources add together to produce a single, more complex pressure wave that is the sum of the individual waves. The problem is how to form separate mental descriptions of the component sounds, despite the fact that the “mixture wave” does not directly reveal the waves that have been summed to form it."
(my italics) Understood as a non-serial stream, the numerous sounds that reach us seem to be all part of a sonic soup we are supposed to drink with forks.
[I agree with George. It really sounds terrible!] This of course does not mean that sounds cannot be scrambled beyond recognition. But to think that the brain has a way of filtering the different frequencies beforehand is a homunculus illusion.
Attention and concentration are processes that still await a plausible explanation of how they are able to enhance certain sound streams above others. The question nonetheless remains whether they do so with the help of frequency filters, therefore at the source, or at the end of the acoustic process, after the different sounds have been perceived.
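Bregman's "mixture wave" is trivially easy to produce and notoriously hard to undo: sampled pressure waves from separate sources simply add pointwise. A sketch with arbitrary illustrative numbers:

```python
import math

fs = 8000                                   # sample rate, Hz
t = [i / fs for i in range(fs // 100)]      # 10 ms of samples
voice = [math.sin(2 * math.pi * 440 * x) for x in t]          # a 440 Hz "voice"
traffic = [0.5 * math.sin(2 * math.pi * 600 * x) for x in t]  # quieter 600 Hz "traffic"

# What the eardrum receives: one wave, the pointwise sum of the sources.
mixture = [a + b for a, b in zip(voice, traffic)]

# Nothing in `mixture` labels which part came from which source;
# recovering the components is the scene-analysis problem.
```

The forward direction is one line of arithmetic; the inverse direction is the whole research field.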

6 Calculations and Sensations (3)
The conundrum faced by Bregman and the various contributors to Divenyi's (2004) volume is, I think, insoluble mathematically without assuming previous knowledge of the different sound sources contributing to a specific spectrogram. In other words, the mathematical approach either relies implicitly on a homunculus, or it must admit that it is powerless to analyze sound events in real time.
That certainly does not mean that a scientific analysis is therefore impossible. Taking our cue from the way biological hearing systems have solved the problem, we must acknowledge that a technical, or technological, solution is necessary instead of brute mathematical computation. What mathematics cannot do is quite elementary for an artificial membrane and a corresponding stapes.
It would certainly not be the first time that physical manipulation took the place of computation. Just think of the way transistors are built: computers would not exist if not for the peculiar reaction of two physical surfaces put in close relationship to each other (Levinshtein & Simin, "Transistors: From Crystals to Integrated Circuits", 1998). Man does not create electricity; he makes it possible by combining matter. We much too easily forget this fundamental phase, in which electrical currents suddenly seem to surge from nowhere just because two different molecular configurations were allowed to interact with each other. Electricity is not a mathematical concept, even if all its manifestations can be expressed mathematically. It is a real physical phenomenon.
In the same way, frequency and amplitude may be more than mathematical parameters: they may also be real physical attributes of physical events. If that is the case, physical manipulation might be the only way to distinguish them in a complex environment. Maybe we need the physical vibrations to further analyze sounds mathematically, and not only at the start of the analysis.
We most probably cannot, armed solely with the mathematical concepts, distinguish one frequency from the other in a noisy environment. But a physical device might do just that.
Too often have scientists thought that whatever a device can do could be reduced to a set of mathematical equations or algorithms. This is certainly the deep conviction of Rodney Brooks ("The Relationship Between Matter and Life", 2001) in his quest to create intelligent, embodied robots. He categorically rejects the idea that the way to better AI is through the discovery of "new stuff". I wholeheartedly agree with him, but not in the way he would hope. What I am proposing is in fact very "old stuff": matter. I will leave the mind equation out for the moment. Suffice it to say that not all brain, or even computational, processes can be solved mathematically or computationally. Sometimes the only way to understand them is to manipulate matter and take note of the changes.
In other words, maybe auditory scene analysis (ASA) or computational auditory scene analysis (CASA) are not mathematical or computational problems, but merely technical issues. [Bregman's approach is less mathematical than it is based on Gestalt principles. ]
It is ironic that the trend of inventing all kinds of mathematical devices for brain processes should have blinded most researchers to this elementary truth: the brain can solve complex problems with very simple means. Before being mathematically reduced, Archimedes' lever had been used for thousands of years by non-scientists. No a priori mathematical analysis could have discovered it.
What is even more staggering is that the idea of an artificial membrane is far from new. Already in the 1930s, Nobel laureate Békésy had built models of a basilar membrane to study its reactions to sound stimulation. You would have thought that after almost a century researchers would have grown accustomed to the idea that a rudimentary physical device could do what mathematics could not (microphones, for instance, still need a membrane). Apparently, the ideological hold of mathematics and the computational view is just too strong to be ignored. Strangely enough, Brooks' approach is based on the idea that intelligence cannot be disembodied. The next step would be to acknowledge that manipulations of the physical world (beyond mere bodily actions in the world) are a fundamental way of gaining new knowledge. Just like the ancient Greeks, we call that physics. [Brooks would of course not object to such a view.]
The question "Why a basilar membrane?" hopefully sounds even less theoretical than before: such a device was a physical necessity, and not only an evolutionary shortcut. [I apologize for speaking on Evolution's behalf. Nobody should be allowed to do that. The only thing that Evolution teaches unambiguously is that things change. The reasons how and why are of our own making.] This also shows the necessity for any scientific analysis to go beyond a mere mathematical description of brain processes and to take the issue of their physical realization as even more fundamental than abstract equations that can be interpreted in so many ways.

7 An empirical test
The question whether the stapes, and therefore the basilar membrane, reacts serially to many different sounds, some simultaneous, some not, and how this affects the frequency and amplitude analysis of each particular sound; or whether they react to the acoustic garbage formed by all the sounds received within a certain period, can be considered an empirical matter. It should be possible to answer the question unambiguously with physical experiments. And if it turns out that there are no easy answers, that would be a very important result as well.
In fact, we already have such empirical tests. Telephones and microphones do exactly that: they convert different sounds, of which many are produced simultaneously, into a serial stream of sounds that we can distinguish.
Why then would the brain re-scramble that which has been so dutifully separated by the basilar membrane?

Spatial codes of hearing
It sounds so strange when we consider that sound knows only two fundamental properties: frequency and amplitude. The Ear and Head Related Transfer Functions are supposed to explain the metamorphosis of ubiquitous sound into directional sound. The changes that the external ear and the head bring about must explain how the brain is thought to be able to compute quantities like the Minimum Audible Angle (Mills, "On the minimum audible angle", 1958; see also Hartmann, "On the minimum audible angle - A decision theory approach", 1989) and other equations that help localize a sound source in space.
But directional sound is something that we humans cannot perceive [unless we step into the path of an ultra-sound wave. A search on the web will provide a wealth of technical information on the subject, and a plethora of commercial products that are supposed to enhance our experience. Fish are supposed to have directional hearing, but that is something I really know nothing about. See Fay and Popper, "Comparative Hearing: Fish and Amphibians", 1999; also ch. 3 of Popper et al. mentioned below].
Let us not dwell on the homunculus aspect of these computations and accept them at face value. Let us even accept the idea that those equations do not amount to a perfect navigation instrument and must be supplemented by vision, head and eye movements, and experience in general.
[I would like to remind the reader, though, that whatever effects the external ear and the head may have on how sounds reach our brains, this knowledge is only available to an external observer. As listeners we have no way of distinguishing between a sound that has impinged directly on our inner ear and one that had to literally pass over our head.] The fact remains that we are apparently able to recognize a sound after it has changed its aspect by moving to another location. We can worry later about how exactly we do that.
So there must be something in the sound itself that not only keeps it recognizable, which does not "sound" extremely improbable, but also that indicates its possible location. And that is exactly the problem, isn't it?
How could previous experience explain the fact that we recognize the same sound as having now another location?
1) We must recognize the sound as such, even if it has now somehow changed. No problem.
2) We must have previous knowledge of the location of the same sound relative to our body. For instance, we have previously identified a sound source in front of us, and have experienced the sound from different angles by moving around it. The identification of the different sounds as being of the same nature or origin does not necessarily mean that those sounds could be objectively linked to the same source by a measuring instrument. Maybe their spectra, Fourier shapes, or whatever it is that sound experts like to play with, would seem completely unrelated to each other. The association of all the sounds with relative locations and sources is a psychological achievement, not a scientific endeavor.
This is how Fay expresses it in ch. 3 of Popper et al. (2005): "it is also quite clear that binaural processing is subject to several essential ambiguities that must be solved using other strategies. These include the fact that all sources on the median sagittal plane result in equal (zero) interaural differences, and that, in general, “cones of confusion” exist bilaterally on the surfaces of which interaural cues are equal for many possible source locations. These ambiguities seem to be solved adequately among humans and many terrestrial species that use head or pinnae movements, information from other senses, judgments of the plausibility (...) of potential locations, and a processing of the head-related transfer function (HRTF) through frequency analysis..."
(my italics) Another interesting article is "Infants' Localization of Sounds within Hemifields: Estimates of Minimum Audible Angle" by Morrongiello and Rocca (1990), where they show that human babies and children learn with time to localize sound more and more accurately. Already in 1940, Wallach had pointed to the role of extra-auditory cues in the localization of sound ("The Role of Head Movements and Vestibular and Visual Cues in Sound Localization"). All in all, sound localization seems to me to be very similar to how we know how hard or how high to throw the ball to score in a basketball game, even though our brain has no way of computing the different trajectories. Still, experience cannot explain the sensation that a sound seems to come from a certain direction, however vague that sensation might be.
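The "cones of confusion" Fay mentions are easy to see in a toy model of the interaural time difference (ITD). Below is a minimal sketch using the simple path-difference formula ITD = (d/c)·sin(azimuth), with an assumed head width, and ignoring diffraction around the head (Woodworth's fuller model adds a correction term); the numbers are illustrative assumptions, not measurements:

```python
import math

SPEED_OF_SOUND = 343.0   # m/s, air at roughly 20 degrees C
HEAD_WIDTH = 0.18        # m, assumed interaural distance

def itd_seconds(azimuth_deg):
    """Interaural time difference for a distant source, simple sine model.

    Azimuth is measured from straight ahead; positive = to the right.
    Ignores diffraction around the head.
    """
    return (HEAD_WIDTH / SPEED_OF_SOUND) * math.sin(math.radians(azimuth_deg))

front = itd_seconds(30)    # source 30 degrees to the right, in front
back = itd_seconds(150)    # mirror position behind the listener
print(front, back)         # identical values: the front/back ambiguity
```

Because sin(θ) = sin(180° − θ), a source in front and its mirror image behind produce exactly the same ITD, so the cue alone cannot distinguish them; this is precisely why head movements and other strategies are needed.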
Experience does not create corresponding sensations, only corresponding behavior or conceptions. Where, then, do our auditory spatial sensations come from? If we want to avoid turning around in circles, we will have to point either at the sound itself, or at our own body as the origin of those sensations. Our brain, once again, cannot produce spatial sensations out of nowhere. Or so I will assume.

8  Are our ears symmetrically located on our head?
Not according to scientific measurements (Abaza and Ross, "Towards Understanding the Symmetry of Human Ears: A Biometric Perspective", 2010). The difference is nigh undetectable, and it is really doubtful whether this minimal asymmetry can by itself account for our spatial sensations.
Still, combined with head and body movements, this small asymmetry might just do the trick. Spatial sensation might be nothing else but the impact of a sound on one ear before the other. This is, I think, much more plausible than the topological arguments used by Rauschecker and Tian in "Mechanisms and streams for processing of ‘‘what’’ and ‘‘where’’ in auditory cortex" (2000). I really have no idea how computations can create any sensation at all, and certainly not the sensation of "where", even if parts of the brain (and not a human individual) could make use of geometrical equations. A view that does not cease to amaze me.

Auditory Field and External Space
What would happen if the only factors allowing us to localize sounds, approximately, were ILD or ITD? All sounds reaching the right or left ear would seem to come not only from the same direction, but also from the same location. What is strange is that sounds coming from a loudspeaker are in fact all from the same location. Nonetheless, we do not have the sensation that they all occupy the same space. I think we conflate much too easily auditory space and real external space. Vision has spoiled us, so to speak. What we see is also where we see it.

All Sounds at One Location?
Two sounds of different frequency will be felt at different locations. This is the basis of the place theory as advocated by Bekesy. It seems to work quite well for certain frequencies, but stops being of any use at some fundamental frequencies like those used by speech, and at very high frequencies.
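The place-frequency map that place theory relies on can be sketched numerically. A common fit is Greenwood's function for the human cochlea; the constants below are the usual published values, and the sketch is illustrative, not a claim about any individual ear:

```python
def greenwood_frequency(x):
    """Characteristic frequency (Hz) at fractional distance x along the
    basilar membrane, from the cochlear apex (x = 0) to the base (x = 1),
    using Greenwood's standard human parameters."""
    A, a, k = 165.4, 2.1, 0.88
    return A * (10 ** (a * x) - k)

# The map covers roughly the human audible range:
print(greenwood_frequency(0.0))  # ~20 Hz at the apex (low frequencies)
print(greenwood_frequency(1.0))  # ~20700 Hz at the base (high frequencies)
```

Note that the mapping is logarithmic: equal distances along the membrane correspond to roughly equal musical intervals, not equal numbers of hertz.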
The problem really is that we still have no clue where the spatial codes could be hidden. How does the fact that neurons are activated at specific places along the basilar membrane explain the spatial character of auditory sensations?
That the auditory field, like the visual field, moves with our own head is understandable.
But while we expect a perfect correspondence between the visual field and 3D external space, that relationship is lost when it comes to sound.
Frequencies seem to have a location in auditory space that turns out very often to be completely unrelated to real sound sources.
We localize sounds in objective external space by ear (ILD and ITD as sensations, not computations), but we hear different sounds at different locations in auditory space instead of throwing them all in a heap. Distinguishing them "audio-spatially" helps us interpret them better [something indispensable for speech production and comprehension], even if it creates some confusion as to where they are in real space.

Does sound explain hearing?
The basilar membrane definitely reacts to sound. But that is certainly not the case for auditory fibers. They are activated by hair cells which react, just as in touch, to movements of their cilia. So yes, hearing certainly involves sound indirectly, but maybe we should consider hearing more as a particular mode of touch. That would in a way explain why we feel sounds as belonging to an auditory space. Maybe what we are feeling is comparable to what we feel when something touches our skin, or our body hair. There is also always a spatial sensation accompanying the touch sensation. That would also explain why those spatial sensations do not give us precise knowledge concerning the source of sounds in external space. They convey the spatiality of our cochlea, just as touch conveys the spatiality of our body as a whole. We feel where we have been touched, or where our skin itches. What makes such a view especially interesting is that it can perhaps be empirically falsified. This could be done by showing that there are no correlations between the objective position of the part of the cochlea supposedly involved in the perception of a specific frequency and the spatial sensation.
I do not think we can hope to get clear-cut statistics that would show a definitive relationship between sensations and spatial position of parts of the cochlea. I am not sure the position of the different parts of the cochlea relative to the body would allow such unambiguous conclusions. But it might be possible to find some correlation between auditory sources and cochlea positions. In other words, that when we feel that a sound is coming from behind us, we are actually feeling a part of our cochlea that could be said to lie caudally, to the back, relative to where we are facing.
A negative result would not disprove the view that auditory spatial sensations are cochlea-related rather than space-related, but it would leave it a little bit dangling in the air, like any other speculative idea.
[I could not find any data concerning the sound localization abilities of patients with a cochlear implant. All the efforts are concentrated on speech comprehension, music being a far second. But if I am right, such patients should have more difficulty localizing sounds. The problem will be to prove that it has anything to do with the number of electrodes. After all, if they can learn to recognize speech, maybe the idea that each auditory fiber can convey all or many acoustic sensations can be complemented with the corresponding spatial sensations. In other words, it might be almost impossible to find definitive answers to all the questions we have. A pragmatic attitude, certainly concerning implants, might be our best bet.]

Hearing as a biological phenomenon and a laboratory experiment

Re-reading what I wrote last year, I realize how artificial sound and hearing experiments usually are. The fact that it is simply impossible to arrive at any unequivocal conclusion concerning sound location is in itself a strong indication of this artificial character.
We very often hear sounds that we find very difficult to pinpoint spatially. That is certainly nothing new, even if it can mean death when stalked by a predator. There are no perfect defense mechanisms. Animals, and not only prey, are constantly on the lookout, turning their eyes, head or ears this way and that. They need to do that because none of their distance organs (vision, smell, hearing, what Ernst Mach called "telescopic senses") has a 360-degree range.
Also, audible indications of danger are never isolated sounds, but are usually followed by others which give them meaning and direction.
So what, if we cannot decide whether a single sound has come from behind, left or right? The fact that participants in such experiments are usually immobilized may be scientifically sound. It is also the greatest weakness of this kind of experiment.

I am a computationalist, so naturally I disagree with your idea that much of the analysis of sensory information is done by "me" and not my brain.
But you raise a good point in your post, which is that analog computation is often only analyzable as a physical phenomenon.  It's odd how often the point is overlooked.  I think there's a tendency to assume that two phases of Alan Turing's career are closely linked: his work on logic and computation in the 1930s, and his work on AI in the 1940s.  In phase 1, he argued that "mechanical" algorithms as carried out by humans could be realized as literal mechanisms of a particular kind, digital state machines (now called Turing machines).  He went on to prove that anything computable by such a machine could be computed by a universal TM, an idealization of the now familiar programmable digital computer.  In phase 2, he and his friends from Bletchley began to think about using digital computers to duplicate or mimic human thought.  Later this would be called "artificial intelligence."
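The "digital state machine" Turing described can be sketched in a few lines. This is a hypothetical, minimal simulator (the rule-table format and names are my own), shown running a rule table that increments a binary number, just to make the concept concrete:

```python
def run_turing_machine(rules, tape, state="start", halt="done"):
    """A bare-bones digital state machine: a tape, a head, a state.

    `rules` maps (state, symbol) to (symbol_to_write, head_move, next_state);
    "_" stands for a blank cell.
    """
    cells = dict(enumerate(tape))
    head = 0
    while state != halt:
        write, move, state = rules[(state, cells.get(head, "_"))]
        cells[head] = write
        head += move
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, "_") for i in range(lo, hi + 1)).strip("_")

# Example rule table: increment a binary number by one.
INCREMENT = {
    ("start", "0"): ("0", +1, "start"),   # scan right to the end of the number
    ("start", "1"): ("1", +1, "start"),
    ("start", "_"): ("_", -1, "carry"),   # step back onto the last digit
    ("carry", "1"): ("0", -1, "carry"),   # 1 plus carry = 0, keep carrying
    ("carry", "0"): ("1",  0, "done"),    # absorb the carry
    ("carry", "_"): ("1",  0, "done"),    # overflow: grow the tape leftward
}

print(run_turing_machine(INCREMENT, "1011"))  # 1011 + 1 = 1100
```

A universal TM is then just such a simulator whose rule table is itself read from the tape, which is the idealization behind the programmable digital computer.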

The easy association of these two phases has led people to assume that Turing somehow proved that human thought could be duplicated or mimicked by UTMs.  He never provided an argument for anything of the sort, unless you count the sort of thought experiments he played around with in papers like "Computing Machinery and Intelligence," his most widely read paper.  The kind of human thought he analyzed in the 1930s was humans carrying out "mechanical" algorithms.  His arguments and proofs from phase 1 are quite convincing (to me and many other people), but they simply don't address the question of humans carrying out day-to-day activities.  Most people never thought in the mode he discussed in the 1930s; people who did it for a living were called "computers."  No one thinks that way for a living any more.  Anyone who in the early 20th century would have needed the services of a "computer" now uses a computer to do it for them, but of course the word has changed its meaning, and now means "electronic digital computer."

In spite of this disconnect, people tend to assume that you don't need any computational resources besides a digital computer (or perhaps an ensemble of them) to mimic the activities of a brain.  That's probably true, just because general-purpose digital computers are fast enough to perform an incredible number of computations per second.  But nothing says the brain has to consist of an ensemble of digital computers.  So why not just admit "analog computers" into the zoo as well?  Because (and this is where I agree with you) there is simply nothing to say about analog computers except that they're physical systems.  (Almost nothing to say.)  There is no useful notion of "universal analog computer."  (The phrase is used in the literature in a few different senses, some universal only with respect to the universe of standard op-amp analog machines, some universal with respect to a mathematical concept of computation with real numbers that is physically meaningless.)  

Nonetheless, people casually assume that any analog computation can be replaced by a digital computation, with a facile argument that the excessive precision of a digital computation can be disguised, by discarding bits, or adding noise or pseudo-noise.  What this argument overlooks (as, in a way, you point out) is the step of analyzing the analog device mathematically.  Once it is reduced to a set of differential equations that include all the first-order effects, that set can be transformed into an algorithm to compute what the device computes.
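The reduction described above, from analog device to differential equations to algorithm, can be illustrated with the simplest possible case, an RC low-pass filter. A sketch using forward Euler integration; the component values are arbitrary assumptions chosen only to make the example run:

```python
def simulate_rc_lowpass(v_in, dt, steps, r=1000.0, c=1e-6):
    """Digitally replay an analog RC low-pass filter.

    The analog device obeys dV/dt = (v_in - V) / (R*C); forward Euler
    turns that differential equation into a step-by-step algorithm.
    """
    tau = r * c          # time constant of the circuit, in seconds
    v = 0.0              # capacitor starts discharged
    trace = []
    for _ in range(steps):
        v += dt * (v_in - v) / tau
        trace.append(v)
    return trace

# Step response: the simulated capacitor charges toward the input voltage.
trace = simulate_rc_lowpass(v_in=5.0, dt=1e-5, steps=1000)
print(trace[-1])  # close to 5.0 after roughly ten time constants
```

This is exactly the move the paragraph describes: once the first-order effects are captured in the equation, the "analog computation" becomes an ordinary digital one, at whatever precision the step size buys.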

So, for example, any semiconductor device can be modeled with a digital computer.  But obviously a digital computer's semiconductor components are not implemented using digital computers!  (Again, as you point out (AYPO).)  In the end physical effects happen because ... they're physical.  Physics is the last stop on the causal-explanation railway, or so says physicalism, which I, and most knowledgeable observers, endorse.

It's conceivable that there are analog devices that cannot be simulated in real time by any digital computer.  An AI that could perceive the things we perceive or think the thoughts we think would have to include similar devices.  (Perhaps the economics just dictate using these devices instead of some complex digital workaround.)  I don't think this would be more than a hiccup on the way to the development of AI, but it is possible.  (But as I opined above, improbable.)  I seriously doubt, however, that we could make use of an analog device without understanding in principle how to replace it with a digital one.  That's because I doubt we could make use of an analog device without understanding, at a mathematical level, what it is doing for the brain.  That is, what sort of signal are the devices it is "listening" to expecting?  If we can't answer this question correctly, we can include a faithful duplicate of a brain mechanism, but we won't know how to connect it up.  I remain a computationalist even while acknowledging the importance of analog computation.

A computationalist must reject the idea that sensory transducers generate sensations, which are then processed by some entity above the level of the brain --- the self, perhaps.  This entity is often simply floating there, untethered by any explanation.  Some people say there's a way of explaining the self and what it perceives, but we'll never understand it.  These people are physicalists in name only.  Others (you perhaps?) believe that sensations can encapsulate physics-based computations that are just too ... magical?  overwhelming?  adaptable?  to understand mathematically.

Computationalists reject any talk of sensations as having explanatory power.  The whole idea just seems like a strange way to understand what the brain is doing.  But it is our way, the built-in way.  That makes me believe we just have a belief system about the workings of our heads that is systematically wrong.  It's adaptive to have such a belief system, precisely because it draws a veil at a useful point.  I've explained these ideas (not very well) in a book I wrote in 2001, Mind and Mechanism.  I won't rehearse them here.

  -- Drew McDermott

I found your comment very interesting and hope that I will find the opportunity of reading your book soon. You have indeed a very different approach from mine, "you being a computationalist and all".

I believe that a computer, in a way, very much looks like a brain, and vice versa, for the simple reason that brains have invented computers, which makes the resemblance, in my eyes, a natural one. There is therefore nothing wrong with using computer analogies in trying to understand the brain, and using what we know of the brain to build more intelligent computers.
I would therefore like to avoid a sterile debate about what makes a brain, or a mind, different from computers, since we are ultimately speaking of the same things without very often realizing it. I have said a couple of times that, as far as I am concerned, whatever a brain can do, a computer can do (better and faster). There is no reason why computers could not do philosophy. It is just a matter of appropriate translation into machine instructions.
There is though a fundamental difference that is more than an epiphenomenon. In that I agree with the author of the Chinese Room, even if I think that he didn't go far enough in his analysis. He stopped at the feeling or conviction that a mind is different because it feels, while a machine does not.
I am convinced that we can build machines, and therefore computers, because we feel. There would be no logic without that emotional dimension, because there would be no reason to follow the steps of a reasoning, except for mechanical necessities. My motto, as expressed somewhere else, is that rationality is a form of emotionality.
I know it sounds very much like Plato, with his identity of The True, The Good and The Beautiful (this reminds me of Clint Eastwood somehow). I wouldn't go that far, but the fundamental idea is the same: you cannot consider rationality as an independent faculty or function.

Another idea I am currently working on, and which still needs much work, is the distinction we make between the brain and its organs. I have the impression that such a distinction prevents us from really understanding the brain. Eyes for instance, are often considered in textbooks as being part of the brain. I wonder if we should not generalize this approach to all organs and sensations.
I am not sure where I will end up, but my leitmotiv is this: sensations are not data that the brain has to work on, like computer data. I would say, without any argument to support such a view for the moment, that there are no computational processes that could work on these sensations. The only other things I see would be emotions and chemical processes.
This is tightly linked to another conviction of mine: the brain as a repository of memories, instead of as a CPU. There is no computer in the brain, even if the brain can function like a computer. That is why we have to be very careful in comparing them with one another.

Interesting thread. FWIW, I argue that the brain is a computational system, but neither digital nor analog: G. Piccinini and S. Bahar, “Neural Computation and the Computational Theory of Cognition," Cognitive Science 37 (2013), pp. 453–488.

I have perused the article mentioned only very briefly. I cannot therefore judge the quality of the arguments presented in comparison with other computationalist analyses. You will understand that I am not convinced of the soundness of computationalism in general, and can therefore not be entirely objective regarding your work. You might be interested in my views concerning neural processes, especially those dealing with neural codes and spikes, in my thread The Brain: some problematic concepts. I look forward to your critical comments and thank you in advance.

Hachem--  I have perused your paper cursorily and I'm impressed with your ideas.  I've been sick, so have limited ability to focus until better, but I'll read and study it more later.   It is intricately associated with my doctoral dissertation (Latent Inhibition and Attachment... Tami Eldridge, Ph.D.  University of Montana--  You can find it on scholarworks on the web).   

In any case, I believe that inhibition is a function of adaptational degradation of the excitatory properties of stimuli, dependent upon the stability of the environment.  I do not believe it is an instrumental cognitive process, rather, more autonomic.  If, in a stable environment, one can habituate to that which has proven repetitive/reliable and can be predictive of future cause and effect, it facilitates attention and productive action in one's environment.  Consequently, the less changeable elements of the environment degrade in their salience (see the construct "latent inhibition"), so that the individual can focus on the figure(s), that is, novel stimuli that occur, and rely upon the stability of the backdrop, the ground.   This type of stable evolutionary environment is more conducive to focused learning about relevant stimuli, thus growth and productive adaptation.  

The unpredictability of the devolved social environment in which we have lived for some time, can create psychopathological-appearing behavioral symptoms, such as that observed in ADHD, when, given the unpredictability of the environment due to little cause and effect between one's behavior and the consequent response, one gets literally overloaded with excitatory (non-disinhibited) stimuli in the interest of survival.  One does not want to lose focus in an unstable environment upon that which a moment later may become relevant to one's adaptation and survival.  

Reply to Tami Williams
I wish you a speedy and complete recovery, and look forward to your comments. Concerning your other observations I am afraid I have no expertise in the matter.

I find your comment to my thoughtful and complimentary words to be quite passive aggressive (but then I am a psychologist, but contrary to popular opinion --real healing psychologists don't spend their recreational or professional time analyzing others "off the clock"). You have no thoughts about my "other observations" contained in my post, yet 'look forward to [my] comments' (implying that the legitimate ones are yet to come).  My scholarly works make sense (they represent many years of careful study and spiritual challenge and development).  Yet, you who seem to write about a similar subject matter for whatever reason (insert a bit of sarcasm here), have "no expertise in the matter." I'm currently in agreement about that.  Are you misogynistic or just an equal opportunity jerk?
I am sick because I am good and under attack spiritually, but my thoughts are clear as was my post.  I take exception to your condescension.  I have no further comment to you.  I gave you my best summary of my work, which I thought was somewhat related to yours.  

People who are here to learn and communicate, not play games, I believe, comprehended my submission, or else the juror of this apparently very carefully moderated site in my couple months of experience would have rejected what I believe was an apt submission (or I wouldn't have wasted the time to carefully write it, and I wouldn't have bothered to be here, or try to submit.  I'm very discriminating about how I spend my precious time and energy, as well as how I represent my declared affiliated organizations).

 Read my specs if you haven't.  I have lost my precious son, been called crazy.... And I feel that in a veiled manner you are diminishing my comments in a similar manner.  "Sorry you're sick, look forward to actual relevant comments to my work that make sense."  I'd swear here at you, but I have decorum and sites like this have standards, thank GOD.  

 Sincerely, Maria de Capernaum, Nazareth, now Missoula, MT.  Join Philosophy of Religion.  I'm there too. :-(   

Reply to Tami Williams
My wishes were well meant. Further, I am not a psychologist and could therefore not really contribute much to the issue of inhibition and behavior. I analyze (psychological) theories from a philosophical and (epistemo)logical viewpoint. That does not make me a psychologist. I am currently working on physical theories like Relativity and Quantum Electrodynamics. That does not make me a physicist either. I would therefore also refuse to discuss a physical statement unless I consider it philosophically relevant and I feel confident that I understand it well enough to say something meaningful about it.

Hachem--  I appreciate your tolerating what could be considered aggressive and reactive statements and going forth to provide a clarifying response.   That level of respectful, reciprocal communication doesn't always ensue where there is conflict in perspectives.  Thank you.  I have very many loving friends who are learned with whom I have sufficient relationship that we can "hash it out" with regard to points of disagreement without "parting ways," and I believe it is within that struggle that we all open-mindedly learn. 

I have other thoughts in reference to your response, but I need to review further your paper, the extent to which I felt it had to do with my dissertation topic, and I want to talk a bit about overlap in the Venn diagram of different disciplinary perspectives, respectful interchange within the same, and the extent to which I feel that therein lies our collaborative learning from different perspectives, all looking at and communicating about the same one world. Therein, I believe, lies growth for all of us.

More, later and again thanks.  I've been an attacked activist for a long time and I'm sometimes too defensive, especially on a particularly challenging day or moment.

All the best,

My response to your paper and its interface with my work will be a multi-stage process.   I have been reading your work, reviewing my own, taking notes and drawing up a model that is predominantly psychological-science oriented, but rooted in philosophy as is all good psychology.  Basically, philosophy just aims at conceptual clarity that fuels all good thought and science, including cognitive science.  I'm going to try to "pitch" this more simply than it is (in the works), to lay a foundation as to what I've been thinking.

First of all, I've been a psychotherapist for twenty-some years, predominantly with children and families, but also I've worked in agencies in rural Montana, so I've worked with every population.  I'm systems-oriented, because it's all that makes sense.  We dwell in a closed dynamic system, everything affects everything else, and when dysfunction occurs, symptomatic "qualia" (your philosophical vernacular for what I've translated to be an integrated unit among integrated larger qualia/units that comprise cell collections, organs, organisms, families, communities...) emerge/break through, i.e., rise above the radar to show themselves as aberrant.   It doesn't mean they are the problem, they just show the problem.

In providing therapy and working on my own psychological health so that I can continue to be a productive member of life and do good work, I have utilized some simple concepts that are easy to convey (the K.I.S.S. Principle of science and life, "keep it simple stupid").  In so doing, I've often talked to myself and my daunted clients about how we all have "a pie diagram of energy", think of a circle, a pie diagram, depicting that you have a fixed amount of resources, and if you inordinately allocate them to one or several things, you will be lacking in another area, as is true with your monetary budget.  Note!:  I may run out of room soon as I'm a doctor OF philosophy, but not IN philosophy, thus not a PRO here.  This intro isn't as pithy as I'd planned, so I might have to follow up if this is posted, or rewrite in a more succinct format to introduce my thoughts.  Bye for now ;-).  Maria

Reply to Tami Williams
Hachem-- Here is Part II of the model I'm developing that I believe draws upon an overlap between your work and mine.

Yesterday, in summary, I described what I've long felt is a key component to understanding one's limitations in the interest of maintaining psychological, physical, spiritual, etc. health, that is, that we as individuals all have a fixed amount of energy [in cognitive science, processing resources] to allocate to adaptation to our environment.  When we either automatically (I like to use the term autonomic, as with brainstem-level survival processes), or instrumentally (more operant, with some level of conscious action or override) inordinately dedicate resources to one or more activities, we can become deficient in certain other areas, and even symptomatic of problems that exist in the dynamic system of our collective "qualia," all of which impact each other.  We'll call this construct, for now, "limited resources."

The second construct, or dimension of this model that I'd like to introduce in this post has to do with the inhibitory/excitatory continuum, which is intimately related to one's history, the current environmental conditions, and one's consequent anticipation as to what the future may hold.  This construct is associated with environmental stability over time, one's personal history (experience of significant discrete trauma(s), as well as low, moderate to severe predictable or unpredictable chronic stressors over the course of one's history).  

Third, related to the above is the extent to which one's psychological health is intimately related to existential presence, the degree to which one is encoding moments of his or her day through their various sensory processes vs. dissociating-- projecting to concerns about the past, or fears about the future.  Failure to be present existentially, sensorily, I have found in my work, is highly correlated with depression and other "so-called" psychopathology.  I'm running out of room, more next time.  Best, Maria

Reply to Tami Williams
I am afraid I see no link between your work and mine. You are developing a psychological theory. I do not, and cannot, missing the necessary expertise. What I normally do is, once again, analyze a theory philosophically and (epistemo)logically. Not all theories allow such an approach. It would make for instance very little sense for me to try and analyze Coulomb's Law, the way surfaces are measured in geometry, or how society's demands work on an individual's psyche. These are "technical" questions that need "technical" answers. In other words, you are presenting and analyzing psychological queries that need a psychological theory to deal with, which you are in the process of building. I, myself, wouldn't know where to begin.

I will then, not address you specifically, rather the interdisciplinary audience in this forum if one exists here.  I have found receptivity to my psychological perspective in the communities that exist in other areas of PhilPapers and for some reason the juror here has allowed my posts.  And, after all, this forum is "Philosophy of Cognitive Science" for heaven's sake :-o.  Having attained a PhD and internship from medical caliber programs, I know well that philosophy is integral to all sound psychological science.  

I would be interested in your criteria as to how you determine which theories are amenable to your "philosophical" analysis vs. those that are not.  Being a scientist, I look for conceptually clear criteria backing such assertions.  Otherwise, I feel that statements, such as yours above, that perhaps cannot be substantiated with any protocol can shut down communication across perspectives/disciplines, can thwart our open-mindedness, and thus impede the collaborative enhancement of our good works.



Review of Model in Development, with some Elaboration (to forum, not specifically Hachem who has stated that he sees no relevance to his work.)
Part I "Pie Diagram of Energy"  Limited Resources (for each "qualia" - cells, collections of cells, organs, organism(s), families, communities- and it is a "village" in that everything affects everything else in a closed dynamic system)

Part II
A.  Excitatory-Inhibitory Continuum - Dependent upon Environmental Stability (Hx of Acute and/or Chronic Trauma and Stress)   [There exists a baseline level of arousal, changeable, dependent upon history, current situation, and anticipation of future contingencies, which impacts the required salience of specific stimuli to be perceived against the backdrop.]

B.  Existential/Sensory Presence - Key to Psychological Health

Part III
A.  Pragmatic vs. Esoteric  Both perspectives are important to applied science, however the thrust of the current model is ultimately pragmatic (applied), but still draws on purer, more abstract philosophical conceptualizations.  The goal of my work is knowledge to advance healing, physical, psychological and spiritual health.  This includes developing the conscious ability to override automatic/autonomic processes in the interest of adaptation.

B.  Figure vs. Ground The ability to selectively attend to a specific stimulus or stimuli in one's environment is intimately related to baseline level of arousal, which is contingent upon past and current environmental stability.  (I am out of space, expansion and clarification, next entry if posted.)

Dictionary definition of epistemology:
The theory of knowledge, especially with regard to its methods, validity, and scope.  Epistemology is the investigation of what distinguishes justified belief from opinion.  

Given that I haven't heard further from you regarding criteria by which you determine that which is amenable to epistemological analysis vs. that which is not, I thought I'd take a stab at wondering. Again, I am a psychologist, not a philosopher per se, but have always had a philosophical inclination to my thought, even if it is not informed by extensive philosophical learning and methodology beyond my psychology training.

I haven't returned to your paper, as it makes no sense to debate what seems to be some convergence between your area of study and mine when you see none.  However, you discuss 'inhibition' which is a psychological construct, not part of the vernacular of philosophy in my understanding.  

PhilPapers has subforums that subject science, religion, aesthetics, cognitive science, etc. to philosophical scrutiny (as in the above definition of epistemology).  If we were to, say, examine just the epistemological legitimacy of the construct of 'inhibition', how would a pro philosopher go about that?  I simply approach that which I don't know with problem-solving and logic...   Consequently, I would start with the definition of inhibition.  The various dictionary definitions online err in the direction of social inhibition: "an inability to act naturally, especially because of a lack of confidence."  Most pertinent to the current discussion may be the sensory construct of lateral inhibition, in which an excited neuron reduces the impact of its lateral neighbors, creating a contrast in stimulation that facilitates sensory perception.  (Out of room, more next entry if posted ;-)). Maria
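Maria's description of lateral inhibition can be illustrated with a toy numerical sketch (my own illustration, not part of the original exchange; the one-dimensional stimulus, the `lateral_inhibition` function name, and the 0.2 inhibition weight are all arbitrary assumptions): each unit's response is its own input minus a fraction of its immediate neighbors' inputs, which exaggerates the contrast at an edge in the stimulus.

```python
# Toy sketch of lateral inhibition (illustrative only; the 0.2 weight
# and the stimulus values are arbitrary choices, not empirical ones).
def lateral_inhibition(stimulus, weight=0.2):
    """Each unit's response = its input minus a fraction of its neighbors' inputs."""
    n = len(stimulus)
    responses = []
    for i in range(n):
        neighbors = 0.0
        if i > 0:
            neighbors += stimulus[i - 1]
        if i < n - 1:
            neighbors += stimulus[i + 1]
        # Excited neighbors reduce this unit's response (floored at zero).
        responses.append(max(stimulus[i] - weight * neighbors, 0.0))
    return responses

# A "dim" region next to a "bright" region: after inhibition, the unit
# just inside the dim region is suppressed more than its dim neighbors,
# and the unit just inside the bright region stands out above its
# bright neighbors, so the edge is exaggerated.
edge = [1.0, 1.0, 1.0, 5.0, 5.0, 5.0]
print(lateral_inhibition(edge))
```

This edge exaggeration is exactly the "contrast in stimulation that facilitates sensory perception" the post describes: inhibition does not add information, it redistributes response so that differences become easier to detect.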

Reply to Tami Williams
I have no clear-cut answer as to what criteria I use for deciding that a theory is amenable to my mode of analysis, except this general one. A psychological theory can be judged by its predictive or explanatory value in the field of psychology, however vague the definition of such a field might be. Such a judgment can only be made by what I would call "experts", in this case psychologists. Experts are of course not infallible, and non-experts are free to dig in whenever and wherever they wish. But theories very often go beyond simple predictions and explanations. They can express philosophical or ideological convictions that are not explicitly stated. A psychological theory can, for instance, be shown to rest on the implicit superiority of males, even if this is nowhere stated. Such a theory could still make valuable predictions and give rational explanations of many psychological processes. The critique of such a theory would then have to show that these predictions and explanations sound rational only because male superiority has remained unquestioned. That would be the case for many historical essays dating from a few centuries, if not merely a few decades, back.
At that moment, the critique is not building an alternative theory but showing the inherent limits of certain assumptions.
That is what I have tried to do in all my threads. Concerning inhibition, I am not trying to build a psychological theory, but to analyze the concept itself, especially its biological origin, by looking at the methods and, where necessary, the implicit assumptions made by the scientists.
You, on the other hand, are in the process of building a psychological theory. I will be more than happy to analyze its philosophical, epistemological or ideological tenets, if I think that would be interesting to others. Still, do not expect from me a judgment of its psychological value, for the simple reason, once again, that I am not a psychologist.

Ok, thanks.  I HEAR you :*).   I have a number of reactions to your post, but want to start with two issues that have been percolating in my mind in terms of our process here, as well as something that pertains specifically to the concept or construct of inhibition.  
You don't seem to disagree that the concept/construct of inhibition is amenable to philosophical analysis in the realm of cognitive science.  So wouldn't it follow that other semantic chunks of my or other theories, be they psychological, linguistic, or geometric (who cares?), are amenable to the same kind of epistemological analysis, to determine their individual and collective legitimacy, say in terms of criteria I've been learning about in the philosophical literature, like 'truth,' 'belief,' and 'justification'?

It would seem that for philosophy to travel down to terra firma, interdisciplinarily, from the esoteric to the applied, there would need to be courageous philosophical experts, such as yourself, who venture into all sorts of areas to apply philosophical analysis to the components of theories, and ultimately arrive at a cumulative determination about the whole, even when it requires research into the vernacular of that discipline.  That is my process issue.

Secondly, in terms of the concept of inhibition, there is a mathematical truth about lateral inhibition, a sensory process that has some empirical justification.  Given limited resources in a moment of, say, auditory or visual perception, an excitatory event (by virtue of sudden onset, that is, the time factor, or of salience, that is, illumination, novelty, hue...) causes neuronal excitation centered on the figure (the focus of attention), which by necessity diminishes the resources that may be allocated to the ground (inhibition by virtue of the salience of the focal point). This is a logical and mathematically demonstrable truth, which would be difficult to dispute.  Of course, one's level of response to the novel/salient/sudden figure will again depend upon the current existential context (e.g., resting level of vigilance), which is itself dependent upon current circumstances, personal history, environmental stability, one's unconscious/autonomic anticipation of the adaptive consequences of salient stimuli, and one's learned ability to consciously override this baseline level of arousal.
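The limited-resources claim in this paragraph can be sketched as a toy allocation model (a hedged illustration of my own, not Maria's formalism; the fixed budget, the `allocate` function, and the proportional rule are all assumptions made for the example): if a fixed processing budget is divided among stimuli in proportion to their salience, then raising the salience of the figure necessarily withdraws resources from the ground, even though the ground itself is unchanged.

```python
# Toy fixed-budget model (an illustrative assumption, not an
# established formalism): resources are split in proportion to salience.
def allocate(saliences, budget=1.0):
    total = sum(saliences)
    return [budget * s / total for s in saliences]

# Four equally salient stimuli: no figure, resources spread evenly.
quiet_scene = allocate([1.0, 1.0, 1.0, 1.0])   # each gets 0.25

# The first stimulus becomes salient (sudden onset, novelty, hue...):
# it now draws most of the budget, and the ground items lose resources
# even though their own saliences never changed.
with_figure = allocate([5.0, 1.0, 1.0, 1.0])   # figure 0.625, ground 0.125 each
```

The point of the toy model is only the necessity claim: under any fixed budget, excitation of the figure and inhibition of the ground are two sides of the same arithmetic.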

Reply to Tami Williams
I am afraid I have no idea what you are talking about. I cannot recognize anything I have said about inhibition in your reaction. It would help if you pinpointed (parts of) the posts you are targeting, and stated more concretely what your objections or points of agreement are.

I have a second reaction to your most recent post, in spite of not knowing whether my first reaction will be published.  My pithy commentary is constrained by the limitations of space, given that I don't have pro-philosopher status here, yet it is based on many years of university study of the concept of inhibition, which, again, is a psychological construct.

Do you really have actual credentials, even in philosophy?  It appears from your posted biography that you do.  If you don't, I promise you, they are more difficult to achieve than one would think, and it shows in one's writing as well as in one's tolerance for novel information.  It would appear you are of "pro status", yet in re-reviewing your writings, they are rambling (asystematic), reductionistic, and anecdotal, and they mix many psychological terms with philosophical nomenclature with no clear point(s) ultimately.

I noted your reference to the 'well-substantiated fact' of male dominance a couple of posts ago, which I'm sure you just used by way of example or explanation. :-)  Perhaps the male superiority thing isn't working for you [insert sarcasm here].