
2009-02-13
The 'Explanatory Gap'
I'm wondering what people/philosophers in the field think about the 'explanatory gap' today.

From what I understand, Levine (1983) believes the gap is one of a lack of knowledge, and that one day we'll understand the relationship between qualia/consciousness and physical processes. Chalmers (2006) suggests there's always going to be a problem of an explanatory gap.

Tye (1999) thinks it's all an illusion and Gertler (2001) disagrees with this view. Harman concludes that there will always 'at least be an explanatory gap', and now there are claims that the theory of Higher-Order Thoughts (HOTs) closes the gap.

Does the HOT theory stand up amongst all the other views, or are we still a long way off from fully understanding the relation between consciousness and physical processes?

2009-02-14
The 'Explanatory Gap'
Reply to Jamie Wallace
Well, like most questions in philosophy, there is no consensus.  Certainly, no specific proposal for closing the explanatory gap has attracted much support.  I think that the most common view by far is that there is a fairly deep explanatory gap, but there's a lot of disagreement about whether that situation is temporary or permanent, and about what follows from this.  My sense of the sociology is that philosophers divide into four (very roughly delineated) groups of roughly equal size:

(1) There's no explanatory gap, or one that's fairly easily closable.
(2) There's a deep explanatory gap for now, but we might someday close it.
(3) There's a permanent explanatory gap, but not an ontological gap (so materialism is true).
(4) There's a permanent explanatory gap, and a corresponding ontological gap (so materialism is false).

Maybe we should add polls to PhilPapers to see how many people class themselves in each group.


2009-02-16
The 'Explanatory Gap'
Reply to Jamie Wallace
The view of the explanatory gap depends upon whether a particular philosopher believes there is anything to be explained. Eliminativists would not really hold that there is any gap.

Those who believe in phenomenal consciousness will, by and large, believe in an explanatory gap.

Do HOT theories avoid the gap?  If you can explain how a higher order (process?) can instantiate phenomenal consciousness in a lower order process or in reverberation with some lower order process then you will have closed the explanatory gap.

My view on the explanatory gap is that we are not being sufficiently empirical in our approach to phenomenal consciousness.  It is not sufficient to say "phenomenal consciousness is what it is like to be conscious"; we must also actually describe what it is like to be conscious.  See for instance: Time and conscious experience http://newempiricism.blogspot.com/2009/02/time-and-conscious-experience.html

2009-02-18
The 'Explanatory Gap'
Reply to Jamie Wallace
Thanks so far for the responses received. Would be good to see a few more views.

I just read in the paper yesterday that the area of the brain which controls jealousy has been found. According to the article it's the same part which detects real physical pain. It was a scientist called Hidehiko Takahashi who led the research. They also found the bit in the brain that makes you feel delight at another's misfortune (schadenfreude).

So if this is true, then this would explain the mind-body problem, yes? Could it also be argued that if this is the case, then the way in which the brain creates mental pain is the same for all people, and therefore, as it is one and the same, there is no difference, and one's own experience of how it feels to be jealous, for example, would be the same feeling for another person? Wouldn't this bridge the 'gap'?

Or would you take a reductive approach and say: ok, scientists have discovered where the mental pain of jealousy and delight at another's misfortune is found, but there are still many other feelings, or 'what-it's-likeness', they haven't accounted for; there is still a gap between the mental and the physical?

2009-02-18
The 'Explanatory Gap'
Reply to Jamie Wallace
Hi Jamie,

The mind/body problem is pretty complicated and does exist. David Chalmers' The Conscious Mind (New York: Oxford University Press) is a good starting text.

A quick, one-sided intro that should rattle you is at: http://newempiricism.blogspot.com/2009/01/perceiving-perception-and-seeing-seeing.html

There is an online Wiki text at http://en.wikibooks.org/wiki/Consciousness_studies which is dedicated to this problem.

2009-02-18
The 'Explanatory Gap'
Reply to Jamie Wallace
Jamie, I should have kept the philosophy on the forum. Perhaps I could ask you some questions:

How does a set of action potentials (nerve impulses) become like the pain in your experience? 
Why are some action potentials correlated with "pain" in experience and others "blue"? (The action potentials themselves are almost identical).
At any instant how does an action potential in one nerve cell become associated with an action potential in another to make a pain in the thumb? How do these separated, almost identical flashes of electricity become the experience (especially when they do not flash synchronously)?
What is your experience like? Is it like flashes of identical pulses in a soggy mass of tissue at all?
So how does the mind relate to the body?


 

2009-03-19
The 'Explanatory Gap'
The notion of an explanatory 'gap' suggests that something about consciousness has already been explained satisfactorily. What is that, I wonder?

The notion of a gap is also potentially misleading since it implies that the method being used is OK but that it can't quite get us to the target - or can't just yet.  But the fact that something cannot be explained by a certain method may also mean that the method is completely inappropriate to the task.  Then we are not talking about a 'gap' but a kind of abyss - or, to change the metaphor, a philosophical dead end.

The term 'gap', in other words, betrays a certain philosophical complacency, does it not?

 

2009-03-20
The 'Explanatory Gap'
Well, I think there is a fifth view, which, I would say, is the most popular among philosophers (and I subscribe to it as well):

There's a permanent explanatory gap, but we will never know whether there is an ontological gap (so whether materialism is true or false cannot be established from epistemic gaps).

Even if it's not the most popular, it is the safest view to hold according to many people, as it only involves denying that the explanatory gap has bearing on ontology, without a corresponding belief that materialism is actually true. So it only says that materialism might be true in spite of a permanent explanatory gap.

2009-03-21
The 'Explanatory Gap'
I take it that 'ontological gap' means whether or not the 'materialist' method is appropriate to the task. (I prefer to put things in everyday terms and avoid jargon.)  If there is an 'explanatory gap', that surely means the materialist method may in fact be quite inappropriate to the task. To that extent, the 'gap' does have a bearing on 'ontology', does it not?

The term 'gap' is also very questionable, because it implies that something has already been explained successfully. Three questions arise: (1) What exactly has been explained? (Perhaps nothing at all?) (2) How could one judge 'success' in such a case given that one does not know what consciousness is (and definitions are all over the place) and one may in fact be heading in quite the wrong direction?  (3) How do we know that the 'gap' is not an abyss - i.e. that what (we think) we know is infinitesimally small compared with what there is to understand?

2009-03-23
The 'Explanatory Gap'
I also think that an approach to the explanatory gap should be more empirically committed, both in drawing on empirical findings and in making the claims empirically testable, or at least pointing out in which way they might be tested. By drawing on empirical findings I mean changing a conception of phenomenal properties based on empirical findings, for example treating them as relational rather than intrinsic. By empirical testability I mean providing a framework of how a hypothesis might be tested. I'm not sure whether any kind of HOT could be neurobiologically corroborated. Without an empirical commitment any kind of theory of consciousness would be mere guesswork.
On the other hand, there are some attempts to study phenomenology empirically: Varela (Varela 1996), Aydede and Price (Aydede 2006); the McGill questionnaire might be used for that as well; etc.

2009-04-05
The 'Explanatory Gap'
Reply to Jamie Wallace
I think the explanatory gap is a sign of an ontological gap. No intuitive reason has been offered, in my view, as to why we should find it so hard to make sense of consciousness being put in place by non-conscious processes, on the assumption that that really is the state of nature. There are some awkward attempts at explaining away the gap via features of our concepts, or a notion of our 'cognitive limitedness', but I don't think anyone would be satisfied if something like this was the final word on the mind/body problem. It remains intuitive, after all this time, that if the world is entirely constituted of concrete items of nature 'n', that everything in the world ought to be intelligible in 'n' terms, in principle. But physicalism as normally understood cannot, it seems, meet this requirement. (All credit to people like Frank Jackson, in my view, for at least recognising the theoretical debt physicalism owes, and trying to make good on it)

2009-04-07
The 'Explanatory Gap'
Reply to Jamie Wallace

THE FEELING/FUNCTING PROBLEM IS INSOLUBLE

Until someone can successfully answer the question "How and why do we feel (rather than just "funct")?" there is and will remain an "explanatory gap."

Attempts to close that gap invariably boil down to answers to the question of how we do and are able to do things (computationally, neurologically, evolutionarily) -- i.e., answers about how and why we "funct," rather than answers to the question of how and why we feel (or, to put it another way, how and why it (sometimes) feels like something to funct).

Hence all the attempted answers simply beg the question.

The reason I am pretty confident that the question, when not begged, will remain unanswerable -- except, of course, if dualism is true, and feeling turns out to be an independent causal force in the universe ("I did it because I felt like it"), which it isn't, and won't -- is simply a matter of causality: 

Either feeling is an independent causal force -- in which case it can play a causal role in functing -- or it isn't. 

It isn't. 

So there's no causal role left for feeling. It's superfluous. Yet it's there. 

Some of our functions are indeed felt functions. Indeed, it feels as if feeling is what life is all about. 

But there is no room for a causal account of how or why. 

Hence the mind gap. 

Stevan Harnad



2009-04-08
The 'Explanatory Gap'
Reply to Jamie Wallace
Just a word on "It remains intuitive, after all this time, that if the world is entirely constituted of concrete items of nature 'n', that everything in the world ought to be intelligible in 'n' terms,"

As a general comment (which may not apply to Sam, so apologies in advance if it doesn't), I am frankly always surprised that analytic philosophy makes such heavy use of the idea of intuition. I so often hear from philosophers of this persuasion phrases like "Intuitively, it just seems..." or "It seems intuitive that..."  Here we have a school of philosophy that lays so much stress on its claims to rigorous analysis etc, constantly appealing to intuition!  One might perhaps expect it from continental philosophers (who oddly enough seldom seem to use the term) but from analytic philosophers??  Similarly, I often hear arguments dismissed on the grounds that "That just seems weird" (or some similar term).  That, to my mind, is equally unacceptable.

On the main point in the above statement, I don't see any a priori reason why we should assume that the world is "entirely constituted of concrete items of nature 'n'". It might surely be constituted of any number of kinds of items. And of course the word "constituted" poses problems anyway...

2009-04-09
The 'Explanatory Gap'
Reply to Derek Allan
Hi Derek. I certainly agree about the over-reliance on how things 'sound' to analytic philosophers, and am equally suspicious about the appeal to intuition, if that just means naked, unexamined intuition. But, as you anticipate, I don't think I'm in that camp. I might be guilty of talking the intuition-talk but I intend something more robust. Would you be happy with 'it stands to reason' or something like that?

As for your comment on the main point: I don't see any a priori reason either why we should assume that the actual world is a monistic one. But physicalists--among others--do think such a world actually obtains. Whatever the diverse natures of the concretes that exist, they are all physical, and whatever else they are supervenes on the basic physical nature. That, I take it, is common ground for physicalists. Well, I claim that given this ontology everything there concretely is ought to be intelligible in physical terms. And as I say, I think only rather awkward reasons have been given ('our phenomenal concepts are deceiving us!') to head off this conclusion and its ontological import (given the explanatory gap).

As for constitution...I'm not sure I'm really getting myself into trouble there--I think that relation would do--but swap it for something you prefer. Why not?

2009-04-10
The 'Explanatory Gap'
Reply to Sam Coleman
Hi Sam

Thank you for your reply.  I get the impression our views are in fact not far apart.

My problem with 'intuitive'  etc, just to be clear, is that, to my mind, arguments in philosophy, apart from minor, self-evident ones, should always be supported by reasons - which of course may not turn out under criticism to be good reasons but that is another matter.  'Intuitively', or locutions like 'That just seems weird', are not reasons in my view.  If I were king for a day I would ban them from all philosophical discussion forthwith (along with, in my own area of interest, the word 'aesthetic'). 

My problem with 'constituted' is probably dealt with by what you say in your second para - if I follow you properly. The word seems to imply - or could be taken to imply - that we can sensibly speak of the world as made up of some basic physical item (or items if we are not monists).  But the mind, or consciousness (for example), may not in this sense exist at all - i.e. it may not be a 'constituent' at all - because it may not pass such a test (and the fact that it seems to disappear 'into thin air', so to speak, when we die is at least one reason for thinking so).  So here we seem to be faced with something that, at one and the same time, exists (because we seem to sense it does) and doesn't exist (because it will probably never pass any physical test for existence). This incidentally is why I am always annoyed by the phrase the 'ghost in the machine'.  It seems to imply that the only valid test of existence is a physical one and that any other form of existence must be supernatural - in the comic book sense of ghosts etc.



2009-04-11
The 'Explanatory Gap'
Reply to Jamie Wallace
Long way off. Tim Crane has recently written a paper (which appeared in C. Penco, M. Beaney and M. Vignolo (eds.), Explaining the Mental, Cambridge Scholars Publishing, 2007) in which he criticizes physicalists who claim that there is no need to give an explanation of the explanatory gap.  Crane argues that physicalists who make this claim make it impossible to distinguish Physicalism from Emergentism. According to Emergentism, mental properties are emergent properties which cannot be reduced to or explained by physical properties. Physicalists therefore have to provide some kind of explanation or have to say that the problem is irrelevant because sooner or later we will arrive at a satisfactory explanation (which means that they have not got one).

2009-04-13
The 'Explanatory Gap'
Reply to Jamie Wallace
I think an alternative perspective on the explanatory gap is provided by the biological enactivism of Maturana and Varela.  It seems to fall out of their approach that the explanatory gap is between the animate and the inanimate (or between first order autopoiesis and merely contingent arrangements of matter).  Once the conditions for life are in place, a unity is formed that encounters the world from a fixed perspective. In other words, the essential conditions for subjectivity are available.  It might indeed be some kind of progress if we had only one explanatory gap (life/not-life) instead of two (life/not-life and mind/matter).  Furthermore, it allows a broader class of theorists to contribute to the debate, as the issue of the (non-)reduction of biology to physics becomes an area of common interest to philosophers of mind, biologists and physicists alike.

2009-04-14
The 'Explanatory Gap'
Reply to Jamie Wallace
Just re Carlo's comment: "Physicalists therefore have to provide some kind of explanation or have to say that the problem is irrelevant because sooner or later we will arrive at a satisfactory explanation (which means that they have not got one)."

Yes, and I think the statement that "sooner or later we will arrive at a satisfactory explanation" implies something else as well. It implies that physicalism is really the correct approach (and thus we only have to wait for the answer to come). So it's a bit of a dodge: it removes the necessity of scrutinising one's basic assumptions and methodology by deferring any such analysis to some indefinite time in the future. Behaviorism (Skinner et al) used to do exactly the same thing. Which is not surprising, because the physicalist approach is really only behaviorism transposed to a micro level.


 

2009-04-14
The 'Explanatory Gap'
Reply to Jamie Wallace
Carlo Penco wrote "Physicalists therefore have to provide some kind of explanation [of the explanatory gap] or have to say that the problem is irrelevant..."

Fred Cummins wrote, of the biological enactivism of Maturana and Varela, "Once the conditions for life are in place, a unity is formed that encounters the world from a fixed perspective. In other words, the essential conditions for subjectivity are available."

To explain the explanatory gap, in my view, we have to take subjectivity and objectivity seriously. Thomas Nagel made a good start, but didn't go far enough. For a long time I was strongly attracted to dual aspect theories, but my present position is better described as a dual perspective theory, where the perspectives are objective and inter/subjective, and these are explained in psychological terms. The concept of phenomenal consciousness belongs to the inter/subjective perspective, or mode of operation, if you like, in which we identify with others and reflect on ourselves, while science is necessarily, maximally objective. To juxtapose these is therefore to commit a category error. But it's important to recognise that the inter/subjective perspective is as valid, in the appropriate contexts, as science. It is not only natural, but necessary and right, that we treat other people as if they were thinking, feeling beings, like oneself. But that's a practical and moral obligation: there is no need to seek any objective justification for it. We can even go further, and adopt a sort of panpsychism, attributing consciousness (self-identifying) wherever that is possible and/or in any way beneficial, as long as we realise that this is an inter/subjective endeavour, not an objective one -- and there are many who would claim that there are spiritual benefits to viewing the universe as one conscious being, some of whose parts imagine that they are separate entities -- a claim that can be understood as having no supernatural implications whatsoever -- and "spiritual" can be understood as a subset of "psychological". It's just a matter of perspective.

2009-04-14
The 'Explanatory Gap'
Reply to Jamie Wallace
Alan: Physicalism is just micro-behaviourism, yes I like that idea a lot, there's much in that thought. What I don't like, however, is the appropriation of 'physical' that has occurred over the last century or so, and which you subscribe to in the way you set up the opposing worldview, that even totters on the verge of denying consciousness altogether. We don't just 'seem to sense' consciousness, for any 'seeming' and sensing that there is in fact constitutes consciousness, or cannot go on outside of it: there's nothing of which we're more certain. And I'm just re-hashing Russell when I say that consciousness is the aspect of the physical world that we know best. Any sense of 'physical' that renders this problematic is itself problematic. Hence the Gap, in my view.

(aside: I hope you'll get involved with the Phenomenal Qualities Project I'm running at Hertfordshire, starting next month for three years. The website is my homepage as listed here on Philpapers.)

Fred: Having read some Varela at the behest of a student of mine I have to say I was hopeful, but then disappointed. I just don't see how that embodied/enactive stuff helps at all with the explanatory gap, nor why one gets to say that a being of the kind you describe comes packaged with a perspective on the world, let alone a phenomenal one, and that this is somehow made intelligible on the view. Moreover, it seems to me that the life/non-life gap and the explanatory gap are pretty well distinguished these days: in fact it's part of how hard the hard problem is that 'solving' the life problem--in so far as there is one--would seem to bear no conceptual connection at all to the explanatory gap concerning consciousness.

2009-04-15
The 'Explanatory Gap'
Reply to Jamie Wallace
Hi Sam

Glad you like the proposition that physicalism is micro-behaviorism. I have found in the past that some physicalists tend to wax a bit indignant at the suggestion. Perhaps they do not like the idea that their approach bears some resemblance to Skinner et al (which I can understand!).

As to your comments on 'physical', I certainly would not want to adopt a position which, even by implication, denied consciousness.  What I would want to deny, though, is that the idea of consciousness is as easy to define as some of the discussion I see - especially in analytical philosophy - seems to assume. For example, the widespread tendency to discuss human consciousness merely in terms of perceptual faculties, such as sight etc, seems to me simplistic in the extreme. Human consciousness, in my view, is inseparable from what we call hopes, fears, joys, sorrows, a sense of the passing of time etc. Thoughts and emotions, if you like. Discussing it in terms such as "do I see the colour red?" etc is just tinkering with the problem in my view.

I would agree with your comment that "any 'seeming' and sensing that there is ...cannot go on outside of it" (I have my doubts about the bit I omitted in your comment).   But that, in my view, is one of the reasons why the idea is so elusive: we are trying to define something by making use of that very something to do so. I would on the other hand have severe doubts about Russell's view that "consciousness is the aspect of the physical world that we know best". I am not at all sure that consciousness is an "aspect of the physical world" and I am very sure that, personally speaking, it is one of the aspects of my experience that I know (in the sense of understand) the least. 

DA  

PS I shall certainly have a look at your Phenomenal Qualities Project.

2009-04-16
The 'Explanatory Gap'
Reply to Stevan Harnad

Hi Stevan,

I don't disagree with anything you've said; however, suggesting there is no causal role left for feeling leaves us with a potential problem: there seems to be a reliable correlation between feeling and functing.

Feeling is no ordinary epiphenomenon.  Compare it to a shadow, which reliably correlates with that which creates it.  The outline of the shadow, the light source, and that which blocks the light are all objectively measurable and reliably correlate.

Since feeling is not objectively measurable, it is no ordinary epiphenomenon.  Feeling apparently correlates with the function.  Escaping from pain correlates with the feeling of pain, yet if there is no objectively measurable correlation, why should we presume the qualia should reliably correlate?  We are left with the assumption that feeling pain 'shadows the functing', but there is nothing objectively measurable to suggest why this epiphenomenon should correlate if it serves no purpose.

2009-04-16
The 'Explanatory Gap'
Reply to Jamie Wallace
Does analytic philosophy in this area seriously classify feelings as "epiphenomena"? I shall remember that next time I feel furious, delighted, sad, hopeful, disappointed, nostalgic, vengeful, forgiving, excited, bored - in fact when I feel anything at all!  All just mere epiphenomena!  

2009-04-16
The 'Explanatory Gap'
Reply to Jamie Wallace
I don't think that's a majority view Derek. For my part I'm sure that if anything is causal, qualities of experience are. 

2009-04-16
The 'Explanatory Gap'
Reply to David Chalk

ONE EPIPHENOMENON (AND PROBLEM) IS ENOUGH: THE PROBLEM IS EXPLAINING THE CAUSAL STATUS OF FEELING

"...suggesting there is no causal role left for feeling leaves us with a potential problem..."

Indeed it does! And that problem is called the "mind/body problem" (or the "explanatory gap"). And the problem is actual, not potential. 

Explanation is causal explanation, and if there is no room for feeling as a cause in its own right (as opposed to just being a mysterious correlate of a functional cause), then there is no room for a causal explanation of feeling.

"...feeling and functing... [are] objectively measurable and reliably correlate..."

They do indeed correlate reliably; and the functional correlates of feeling are objectively measurable. Feeling itself, however, is not objectively measurable (but it is subjectively "measurable," and that's good enough). Measurability, though, is not the problem: Causality is.

"...Since feeling is not objectively measurable, it is no ordinary epiphenomenon..." 

Are there any "ordinary" epiphenomena (uncaused or noncausal phenomena)? It seems to me that feeling is the only epiphenomenon...

"...feeling pain 'shadows the functing', but there is nothing objectively measurable to suggest why this epiphenomena should correlate if it serves no purpose..."

You said it (yet again!). But repeating it does not solve the problem (which, to repeat, is causation, not "objective measurability").


2009-04-16
The 'Explanatory Gap'
Reply to Stevan Harnad
Is there a fundamental difference between our inability to provide a causal explanation for the sheer existence of consciousness and our inability to provide a causal explanation for the sheer existence of space-time?

2009-04-16
The 'Explanatory Gap'
Reply to Jamie Wallace
"Is there a fundamental difference between our inability to provide a causal explanation for the sheer existence of consciousness and our inability to provide a causal explanation for the sheer existence of space-time?"

Yes there is, a big one:

(1) The sheer existence of space-time (and of the four fundamental forces, and of the independent natural laws) is a brute fact (until/unless superstring theory or some other unifier manages to trim them down a bit), but their causal powers are as real as causality ever gets.

(2) Feeling exists as surely as gravity does (in fact, for Cartesian reasons, even more surely), but there the resemblance ends, because feelings can have no causal power (unless telekinetic dualism is true, which all evidence suggests it is not). In other words, even though the only intuition we have about causality comes from feeling (i.e., what it feels like to do something -- to cause it to happen -- because I feel like it), that is an illusion, and the real cause is the functing with which feeling is inexplicably correlated.

Some background:

Harnad, S. (1995) Why and How We Are Not Zombies. Journal of Consciousness Studies 1:164-167.  

_____ (2000)  Correlation vs. Causality: How/Why the Mind/Body Problem Is Hard. Journal of Consciousness Studies 7(4): 54-61. 

_____ (2001) No Easy Way Out. The Sciences 41(2) 36-42. 

_____ (2001) Harnad on Dennett on Chalmers on Consciousness: The Mind/Body Problem is the Feeling/Function Problem

_____ (2003) Can a Machine Be Conscious? How? Journal of Consciousness Studies 10(4-5): 69-75.

_____ (2005) What Is Consciousness? New York Review 52 (11)

_____ & Scherzer, P. (2007) First, Scale Up to the Robotic Turing Test, Then Worry About Feeling. In Proceedings of Proceedings of 2007 Fall Symposium on AI and Consciousness. Washington DC. 


2009-04-16
The 'Explanatory Gap'
Reply to Stevan Harnad
"Is there a fundamental difference between our inability to provide a causal explanation for the sheer existence of consciousness and our inability to provide a causal explanation for the sheer existence of space-time?"

"Yes there is, a big one:

(1) The sheer existence of space-time (and of the four fundamental forces, and of the independent natural laws) is a brute fact (until/unless superstring theory or some other unifier manages to trim them down a bit), but their causal powers are as real as causality ever gets."

My question doesn't address the possible causal powers of space-time or the possible causal powers of consciousness. I simply note that we are unable to provide a causal explanation for the existence of space-time, just as we are unable to provide a causal explanation for the existence of consciousness. And I wonder if philosophers have proposed a fundamental difference between these two explanatory failures.

 


2009-04-16
The 'Explanatory Gap'
Reply to Jamie Wallace
Mr. Harnad says:  "Feeling itself, however, is not objectively measurable (but it is subjectively "measurable," and that's good enough)."

I cannot know my own feelings without knowing my bodily states, and these states are theoretically measurable by others.

There is something paradoxical about the view that feelings are only subjectively knowable and yet correlate with objectively knowable processes.  How can you know that your subjective "measurements" correlate to objective measurements, if the subjective knowledge were not linked to objective knowledge in a measurable way? 

No evidence for any such correlations could ever be produced.  I would thus avoid any talk of mysterious correlations between function and feeling. 

We're in the same philosophical boat, regardless of whether we are talking about internal perception (e.g., feelings such as anxiety, hope, despair, and love) or external perception (e.g., of the color red and the heat of the sun).

When we ask, "how could these brain states produce feelings?," our intuition tells us that feelings are too mysterious, too immaterial to be produced by brain states.

Imagine asking, "how does the process whereby light enters my eyes and activates certain neurological patterns in my brain produce color vision?"  The answer is, that process is color vision.  But the advocate of an explanatory gap will say, "no, no.  That is not what I mean.  I mean, how is the phenomenal quality of color vision produced?"  Well, what is that?  Is it the color itself?

Color is produced by the wavelengths of light as they affect our visual processors.  Must there be some other color?  Some "phenomenal color" above and beyond the colors that scientists talk about in terms of wavelengths of light and neurological processes?

I don't see any reason why there should be.  Rather, it seems to me that any talk of some other red is unwarranted and only invites confusion.  Phenomenal redness is redness, and phenomenal experience is experience.  If there are any sound arguments to the contrary, I have yet to see them.

Scientists already have a theoretical framework for talking about colors.  Feelings are not so easy to grasp, probably because feelings are internal perceptions, and not external.  It's not just that the brain remains somewhat of a scientific mystery; rather, it's that the language of feelings is much more abstract and less well-defined than the language of colors.  The language of colors has been more refined over the ages because the objects of perception are much more easily isolated by members of a language community.  The language of feelings is quite crude in comparison, though it has also undergone a noticeable process of refinement.  Still, when we normally talk about anxiety or love, we are talking about rather poorly defined sets of perceptions; nothing as easily identifiable as "red" or "hot."

The point here is, we should first be clear about what we mean by "feelings" and "color vision" and other perceptual processes and entities.  This most basic level of definitions is, I think, where most of the confusion and disagreements originate.  For, if we do not assume that feelings and other phenomenal experiences are distinct from bodily processes, then the question, "how do specific bodily states produce or correlate with the feelings" is easily dismissable.  Bodily states are the feelings. And this is obvious once we closely consider what we are doing when we "measure" our feelings.

2009-04-17
The 'Explanatory Gap'
Reply to Jamie Wallace
I had a brief look at a couple of Stevan's references.  He writes in one: "Let us not mince words. The difference between something that is and is not conscious is that something's home in something that's conscious, something experiencing experiences, feeling feelings, perhaps even, though not necessarily, thinking thoughts."

One thing that's very noticeable in a lot of analytic philosophy's discussion of consciousness, particularly at key points like this, is how often it resorts to metaphor.  (I think also, for example, of David Chalmers' description of a zombie as a creature in which it is "all dark inside"). Now I have nothing against metaphor. Quite the contrary. But it seems odd to find such a strong reliance on it in a school of philosophy that, as I gather, prides itself on its scientific rigour and exactitude.

In this case I should say, the suggestion that consciousness signifies that something is "at home" leaves me feeling very uneasy. How exactly would one knock at the door of consciousness? And then how would one know that whatever answered was consciousness?  (And - forgive the facetiousness - would one even knock if all was dark inside?)

I am also unhappy with phrases like "experiencing experiences".  In a vague sort of way, we tend to link the idea of consciousness with the idea of experience anyway. So defining it in terms of the idea of experience does not seem to get us very far. Ditto for feelings and thoughts.

DA

2009-04-18
The 'Explanatory Gap'
Reply to Jamie Wallace

(1) "Dark inside" is certainly a metaphor (and not a very good one, because there is something it feels like to see dark, and a "zombie" is not supposed to feel anything at all -- like a stone: a better metaphor).

(2) No "rigour and exactitude" being claimed here (and I am not a philosopher). Just claiming that everyone knows what it means to feel something (anything), and that to be conscious is just that, no more, no less.

(3) No point "knocking on the door" of consciousness, because of the "other-minds problem": the epistemic flip-side of the ontic mind/body problem (and equally insoluble, for much the same reasons): either the walks/talks/quacks-like-a-duck ("mirror neuron," or Turing) criterion (based on correlation and similarity) is trustworthy, or you're out of luck.

(4) It is not "in a vague sort of way" that being conscious is linked to being able to feel (something, anything). They're the same thing. And "experiencing" is just another synonym (which I have renounced since that first paper, sticking with "feeling" alone, instead of a string of distracting and question-begging equivocations).

(5) Yes, "feeling feelings" sounds redundant, but in fact it's just what's left of the Cogito. It comes with the territory (of feeling). (So much the worse for "unconscious thoughts," by the way: as incoherent as unfelt feelings: One mind/body problem is enough, and Freud was an even less rigorous and exact philosopher than I...)


2009-04-18
The 'Explanatory Gap'
Reply to Jamie Wallace
Some of the philosophers who feature here may be able to talk the epiphenomenalism-talk, but I bet they don't, can't, live it. 

2009-04-18
The 'Explanatory Gap'
Reply to Jamie Wallace

WHAT CAUSES FEELING VS. WHAT FEELING CAUSES

JS: "I cannot know my own feelings without knowing my bodily states, and these states are theoretically measurable by others."

I can't know I have a toothache without "knowing my bodily states"??

It seems to me I can know perfectly well (and cartesianly, hence incorrigibly) that I have a toothache, regardless of whether I have a tooth, or even a mouth, let alone whether anyone else is measuring or can measure anything, on my body or anywhere else, and whether that measurement does or does not correlate with the existence or locus of my tooth (or mouth) or pain.

And the only "bodily states" I know are the ones I feel, like the toothache. 

I can also feel what it feels like to look at a nocimeter in my tooth or brain that measures and indicates that I am feeling a moderate toothache, when I'm indeed feeling a moderate toothache. That correlation is "close enough for government (scientific) work" as well as for common sense. The clear and present danger of skepticism is not the problem; it's the clear absence of the possibility of causal explanation: Why is my toothache felt (rather than just my tooth-damage just functed)?

And the problem is not really with what causes feeling, as it is with what feeling causes: nothing (even though it feels like it does). That's the "explanatory gap." 

(The correlation between feeling and brain function is close enough so I lose no sleep about whether brain function indeed causes/constitutes feeling, somehow. Of course it does. The lesser problem is with the how; the greater problem is with the why: what causal role does it play that some functions are felt and others just functed? Because the answer looks to be a clear and present: none. -- though it sure doesn't feel that way...)

JS: "How can you know that your subjective "measurements" correlate to objective measurements, if the subjective knowledge were not linked to objective knowledge in a measurable way?"

A skeptic can't know that, any more than he can know that he has a body at all, or that there is a world out there. 

But let's (respectfully) doff our skeptical hats, because the mind/body problem's a lot worse than that. 

(So far, this is just the other-minds/other-bodies problem. That's just an epistemic problem, whereas the explanatory gap's ontic.) 

The real problem is with the (nonexistent) causal role of feeling (even after we've shrugged off the lesser problem of being unable to explain quite how the brain manages to cause/constitute feeling). 

All causal/functional questions are fully answerable without the slightest allusion to the fact that some functions happen to be felt functions: so the question is: why are they felt functions, rather than just functed functions? 

The answer is a resounding silence, because "why" is a causal question too (not just "how"); and there's no room for any causal answer.

Hence the mind-gap.

JS: "When we ask, "how could these brain states produce feelings?," our intuition tells us that feelings are too mysterious, too immaterial to be produced by brain states."

Nothing of the sort; and no appeal to intuition at all. 

I ask a simple, causal question. "Why are some functions felt?" 

And I encounter either silence or a lot of incoherent hand-waving by way of reply.

JS: "Imagine asking, "how does the process whereby light enters my eyes and activates certain neurological patterns in my brain produce color vision?"  The answer is, that process is color vision."

But why does it feel like something to see color? Why is chromoception not just functed optikinetics, as in the case of an optic sensor in a bank? 

(Beware of trying to reply with a complicated functional story here, because the punchline will always be: "Yes, but why is any of that functing felt functing, rather than just functed functing? What causal role does the feeling play?")

JS: "But the advocate of an explanatory gap will say, "no, no.  That is not what I mean.  I mean, how is the phenomenal quality of color vision produced?"  Well, what is that?  Is it the color itself?"

No, no. That is not what I mean. I mean, why does it feel like something to see? 

(Never mind color in particular; it's superfluous. We could do it all in black and white, or just one JND of grayness, or just intensity, in any sensorimotor -- i.e. felt -- modality, from what it feels like to hear a faint sound to what it feels like to be in a blue funk.)

Forget about the supernumerary and superfluous terminology -- "qualia" "phenomenal quality," etc. etc. Just explain how/why some functions are felt.

JS: "Scientists already have a theoretical framework for talking about colors.  Feelings are not so easy to grasp, probably because feelings are internal perceptions, and not external."

It's exactly the same problem (and I really mean exactly) when you are asking about how/why seeing blue feels like something or you are asking about why/how going into a blue funk feels like something. 

(The advantage of focusing on affective feelings rather than sensorimotor feelings is that with affects you are less distracted by the external referent: With feeling a toothache, there's that extra distraction about whether or not there is something going on in your tooth. With feeling sad, there's less scope for changing the subject and begging the question -- though of course there is always the correlated functing in the brain's affective system...)

JS: "if we do not assume that feelings and other phenomenal experiences are distinct from bodily processes, then the question, "how do specific bodily states produce or correlate with the feelings" is easily dismissable.  Bodily states are the feelings."

Hardly. Even if we finesse the lesser unsolved problem of explaining how some functings manage to be felt functings, we are still left with the greater insoluble problem of explaining why. And that (yet again) is our old friend, the explanatory gap. The "hard" problem...


2009-04-19
The 'Explanatory Gap'
Reply to Jamie Wallace
(1) But 'like a stone' would not, I imagine, satisfy David Chalmers. The zombie is supposed to carry on in a normal human way (whatever that means...) but to be lacking consciousness.  It is hard to imagine a stone carrying on in any way at all. (This is a criticism of the zombie idea not your suggestion.)
 
(2) Sorry, this was meant as a general comment on analytic philosophy, not a personal one.  (It's the kind of thing that is often said to contrast it with the continental variety).

(4)  Re your comment, "It is not 'in a vague sort of way' that being conscious is linked to being able to feel (something, anything). They're the same thing." 

I think there are problems here. Does a worm "feel"?  Probably yes, in some sense - though in a sense almost certainly incomprehensible to us. Is a worm "conscious" then?  If not why not? etc, etc

I think this kind of point highlights one of the key weaknesses of "analytic" debates around the notion of consciousness: the fact that the notion is so seldom - if ever - carefully defined.  There is an apparent assumption that we "just know" what we mean by it.  Personally, I think the idea is extremely elusive and that we scarcely ever have any clear idea what we mean by it.  That's why talk about zombies as beings minus consciousness seems so futile to me. Minus what exactly?  The medieval schoolmen were noted for, among other things, appearing to believe that to name something was to make it real and comprehensible.  Analytic philosophy's discussions of consciousness often strike me as scholastic in this sense: they seem to assume that the idea is clear just because it has a name (and, of course, an imposing one too - "consciousness", after all, sounds sort of impressive.)

DA



 

2009-04-19
The 'Explanatory Gap'
Reply to Jamie Wallace
Apologies: "JS" in my earlier response should have read "JW" for Jeremy Wallace. (I don't know how to correct it retrospectively.)

2009-04-19
The 'Explanatory Gap'
Reply to Derek Allan

FEELING, FUNCTING, AND ALAN TURING

DA: "'like a stone' would not... satisfy David Chalmers. The zombie is supposed to carry on in a normal human way... but to be lacking consciousness.  It is hard to imagine a stone carrying on in any way at all."

What we are talking about is the presence or absence of the capacity to feel. A stone cannot feel. There are lots of other things that are true of a stone too: A stone can't do anything either (except fall when dropped, or just lie there wherever it is). But the relevant thing is that it doesn't feel.

Now I have no idea whether or not there can be zombies (and David Chalmers has no idea either).

But I can give you one important example of what a zombie would be, if there could be zombies: A robot that can pass the Turing Test: act and talk in the world, indistinguishably (in what it does) from any of us, for a lifetime -- but without feeling anything at all whilst doing it all (just like a stone).

The reason this example is particularly instructive is that it brings out the fact that although lifelong performance capacity that is Turing-Indistinguishable from our own is certainly no guarantor of consciousness (feeling), it is the best we can hope for, and the closest we can ever hope to get to an explanation of feeling (which is not very close: it just explains the functing with which feeling is apparently correlated). The rest is down to whether or not there can be Turing-scale performance capacity (functing) without feeling. (I think there cannot be, but I certainly cannot prove it; I can't even explain how or why, because no one can explain how or why any function is a felt function, even though felt functions clearly exist -- in us, and other organisms.)

DA: "Does a worm '"feel"?  Probably yes... though in a sense almost certainly incomprehensible to us. Is a worm "conscious" then?  If not why not? etc, etc"  

Probably yes, a worm can feel (no scare-quotes needed), which means exactly the same thing as that the worm is conscious. 

(We can't be sure about anyone/anything else either, because of the other-minds problem, but a worm is almost as good a bet as another person.)

Whether or not the worm feels what I feel, whether or not I can understand what it feels like to be a worm, and indeed what and how much a worm feels is all completely irrelevant. The only thing that matters is whether the worm feels anything at all. If it does, it's conscious (because that's what it means to be conscious), and the fact that it feels is as utterly inexplicable as the fact that I feel.

DA: "consciousness... is so seldom - if ever - carefully defined.  There is an apparent assumption that we "just know" what we mean by it." 

Consciousness does not need to be "defined": it just needs to be pointed to. (That's sometimes called an "ostensive definition".) Something is conscious if it feels. And "feels" does not need to be defined either. Anyone who can speak already understands what it means to feel (with the possible exception of the Turing-Indistinguishable robot, if there can be zombies!). The meanings of our elementary words -- see, hear, touch, smell, taste -- are all grounded in our shared sensorimotor capacity to feel.

DA: "talk about zombies as beings minus consciousness seems so futile... Minus what exactly?"  

Minus feeling (like a stone, if, that is, there can be zombies -- i.e., entities that have our doing capacities but without feeling -- at all).

SH





2009-04-19
The 'Explanatory Gap'
Reply to Stevan Harnad
RE: "But I can give you one important example of what a zombie would be, if there could be zombies: A robot that can pass the Turing Test: act and talk in the world, indistinguishably (in what it does) from any of us, for a lifetime -- but without feeling anything at all whilst doing it all (just like a stone)."

But this seems self-contradictory.  How could a being (assuming it is one) be "indistinguishable" from any of us yet not feel - if what we do includes feeling?  'Well,' one might answer, 'we can't see "inside" it, so it might be doing all these "human" things (!) but not feeling.'  But that only begs the question of if/how we see 'inside' anyone. (The word 'indistinguishable' is a trap here, I think. In effect we are assuming what has to be proved - that something can be 'indistinguishable' from a human yet not be one. But how do we make sense of that idea?)

RE: 'Probably yes, a worm can feel (no scare-quotes needed), which means exactly the same thing as that the worm is conscious.'

Well, if this is the definition of consciousness, do we humans 'feel' in the same way?  If so, is human consciousness the same as worm consciousness (or, say, amoeba consciousness)? If not, there must be two (or more) forms of consciousness. How do they differ? And are we really entitled to call them all by the same name (ie consciousness)? Surely, only if we knew that all creatures great and small 'feel' the same way we 'feel' - which of course we can never know. 

RE: 'Consciousness does not need to be "defined": it just needs to be pointed to.'

But how does one point to something if one doesn't know what it is?  Could I point to a bird if I didn't know what a bird was?  I might in fact be pointing to the squirrel on the branch beside it. And replying, 'Well, we "just know" what consciousness is' is to my mind another trap. We tend to think we "just know", but when we try to state what we "just know" there is endless debate and confusion. For the very good reason that the idea is in fact not easily knowable at all but extremely elusive. 

DA
 

2009-04-19
The 'Explanatory Gap'
Reply to Jamie Wallace
But we don't need to know what the idea of consciousness is. We just need to know what consciousness is. And we each of us know enough to ostend it. Not, granted, in its entirety, perhaps. But certainly in respect of its feeling like something. And that's all we need for the explanatory gap discussion to get underway. 

2009-04-19
The 'Explanatory Gap'
Reply to Derek Allan

SENTIO ERGO SENTITUR

DA: "How could a being... be "indistinguishable" from any of us yet not feel - if what we do includes feeling?"  

I was referring explicitly to Turing Indistinguishability, which means objective indistinguishability from a conscious person, to a conscious person. (The Turing Test boils down to performance indistinguishability, but it could in principle be scaled all the way up to empirical indistinguishability. This is still just an epistemic test (hence vulnerable to the other-minds problem); it is not a metaphysical identity condition. Please let us not begin a debate about the "identity of indiscernibles"! That will just leave the explanatory-gap question far behind, begging it, by conflating the epistemic and the ontic...)

DA: "do we humans 'feel' in the same way?... is human consciousness the same as worm consciousness?" 

I can only repeat: This is not about what is being felt, but about whether anything is being felt at all.

DA: "how does one point to something if one doesn't know what it is?  Could I point to a bird if I didn't know what a bird was?"

We all know what it feels like to feel. We are not pointing to an (empirically risky) external object but to what it feels like to feel: a cartesian certainty all feeling functors share (if there exist any other feeling functors than me!). 

SH




2009-04-19
The 'Explanatory Gap'
Reply to Sam Coleman
RE: "But we don't need to know what the idea of consciousness is. We just need to know what consciousness is"

There is no material difference here.  By the 'idea' or 'notion' of consciousness, I mean consciousness (ie what it means).

And we would need to know it in its entirety.  If we didn't, how would we even know what was 'part' of it?   What (we thought) we knew as 'part', may, for all we knew, be an infinitesimally small and insignificant part, or maybe not even a part at all.  If one had never seen a cat, could one say that some limb one found lying around was part of a cat?

The point about 'ostending' (pointing to?) seems to me, as I said in my reply to Stevan, a red herring.  How does one point to something if one doesn't know what it is?  And to say it is 'feeling like something' raises all the problems about the notion of 'feeling' in this context. In addition, consciousness cannot surely 'feel like' anything - except itself (which other experience, precisely, could we say it 'feels like'?) And saying it feels like itself of course gets us nowhere.

It seems to me that unless one is exacting about these matters, the whole discussion just drifts off into endless confusion. Which is perhaps why there is so very little sign of any consensus in analytic philosophy about what consciousness is. Moreover, analytic philosophy, I would have thought, has a special duty to be precise and exacting, since these qualities are a large part of what it hangs its reputation on.

DA 

2009-04-19
The 'Explanatory Gap'
Reply to Stevan Harnad
RE:  "I was referring explicitly to Turing Indistinguishability, which means objective indistinguishability from a conscious person, to a conscious person...etc"

I'm sorry, you are losing me in jargon. I prefer to stick with plain words. Whether one calls it 'objective indistinguishability' or not, the point remains that to say that a zombie would be indistinguishable from a human but not be able to 'feel' is (leaving aside the ambiguity of the notion of 'feeling') surely self-contradictory. A human necessarily 'feels' or it is not a human. So this 'zombie' would presumably have to be dead - or maybe in a coma. Which is not, I think, what Chalmers has in mind.

I can't help thinking that a lot of this zombie talk is partly influenced by Hollywood science fiction. Hollywood zombies look and behave exactly like humans (they sometimes have a sort of glazed look, I seem to recall!) but we are told somewhere along the line that they are 'really zombies'. So they are found to lack feelings of compassion, kill people at will etc etc. But Hollywood script writers are scarcely reliable philosophers.

RE: "I can only repeat: This is not about what is being felt, but about whether anything is being felt at all."

But exactly the same problem arises: what do we mean by 'feel'? Does a worm 'feel' the same way as we 'feel'?  We have absolutely no way of knowing.

RE: "We all know what it feels like to feel."

I don't. Feeling to me 'feels like' feeling (what else could it feel like?). And that of course tells us nothing at all.

DA  
 

2009-04-19
The 'Explanatory Gap'
Reply to Derek Allan
WHAT IT FEELS LIKE TO FEEL
DA: "to say that a zombie would be indistinguishable from a human but not be able to 'feel' is... surely self contradictory"

Not self-contradictory in the least! But I was referring to a Turing-Test-passing robot, not a "zombie" (about which I am skeptical).

For a robot to pass the Turing Test it has to be able to behave (for a lifetime) in a way that is indistinguishable from a human, to a human. (Humans are very good mind-readers, but they are all subject to the other-minds problem).

All I said about zombies was (1) that I have no idea whether they are possible (but, if not, I have even less idea about how/why not), (2) that an unfeeling robot that successfully passed the Turing Test would indeed be a zombie, and (3) that I doubt that a robot that could successfully pass the Turing Test would be unfeeling -- but no one can or will ever know for sure (except perhaps the robot).

DA: "I don't [know what it feels like to feel]. Feeling to me 'feels like' feeling" 

This is not a point that can be debated further. But I do suggest that you ask a colleague to pinch you. That's an example of what it feels like to feel. And the very same is true for everything else you experience in your waking world: everything you see, hear, taste, smell, touch, if your senses are normal and intact. That's what it feels like to see, hear, taste, smell, touch, etc. 

None of the specific qualitative details matter in the least for the mind/body problem or the explanatory gap: If/when you feel anything at all, whatever it happens to feel like, then you feel (then). And that entails the full weight of the mind/body problem (and the full vacancy in the "explanatory gap" -- a gap in the scope of causal explanation.)

SH




2009-04-20
The 'Explanatory Gap'
Reply to Stevan Harnad
RE: "This is not a point that can be debated further. But I do suggest that you ask a colleague to pinch you. That's an example of what it feels like to feel."

No, that would tell me what it feels like to be pinched. And it would only tell me that being pinched feels like being pinched. Not very informative.

There seems to be quite a lot of talk in this area of analytic philosophy about 'feels like'. (What it feels like to be a bat, etc.) But it's very loose talk, surely. To say 'we know what it feels like to feel', for example, strikes me as a very unhelpful proposition. To feel feels like nothing - except to feel. As I said in my last, what else could it feel like? So for the purposes of analysis or explanation we are no further advanced.

I think the superficial attractiveness of statements like 'we know what it feels like to feel' possibly comes from our tendency, when presented with this proposition, to think 'True. We are not dead, inanimate objects like stones. We respond to things.'  But apart from telling us very little (how informative is it to know we are not like stones?), this is really an invalid step. It's like comparing our living state with our dead state when in fact we have no idea what it is like to be dead. So strictly speaking we are comparing incomparables.

DA

2009-04-20
The 'Explanatory Gap'
Reply to Derek Allan
DA: "To feel feels like nothing - except to feel"

That's good enough, and that was all I was looking for all along. 

You had said earlier: "how does one point to something if one doesn't know what it is?"

Well now you've confirmed that you do know what it's like to feel. So it was enough to just point to it after all.

It's the presence or absence of that (in stones, worms, people, robots, zombies) that we're talking about. Explaining the how and why of being able to do that is the mind/body problem.

And the inability to explain the existence and especially the causal role of that is the explanatory gap.

-- SH


2009-04-20
The 'Explanatory Gap'
Reply to Stevan Harnad
Reply to Stevan:

But it is simply a tautology; it tells us nothing at all.

An analogy: If I put my hand inside a box, feel around inside, and say to someone 'The lining feels like velvet', they now know something about the lining (eg it's not like sandpaper, silk, cardboard, etc). But if I just tell them 'It feels like feeling something', that tells them nothing at all.  So no, I do not know what it is 'like' to feel. And I do not even know how to 'point' at it.

I think analytic philosophy is deluding itself with all this 'feeling like' talk. It's anything but 'analytic'.

DA

2009-04-20
The 'Explanatory Gap'
Reply to Derek Allan
ACHILLES AND THE TORTOISE ON FEELING
DA: "But it is simply a tautology; it tells us nothing at all."
Derek, I am afraid you are systematically missing the point. 

It is not a tautology that some things (like people, and probably worms) feel, and that others (like stones, computers, and today's robots) don't. 

You said you didn't know what it meant to feel. You asked for a "definition" (of consciousness, which I said was exactly the same thing as feeling).

I said everyone who feels knows what it means to feel, because everyone knows what it feels like to feel, and I tried to point to it ("ostensive definition").

You first said one could not point to what it meant: that you didn't know the difference between feeling and not feeling. 

I suggested a pinch.

Then you said you do know what it means after all, but that "what it feels like to feel" is tautological. 

Meanwhile you keep missing the substantive point at issue: that feeling is something that can either be present or absent, and that that is what the mind/body problem and its "explanatory gap" (about which this discussion was launched) are all about. Not about "analytical philosophy," but about how and why some things feel (or, alternatively, how and why some functions are felt, rather than merely being "functed"). 

In discourse, one can always affect not to understand, and that effectively makes it impossible to make any progress. It becomes the dialogue of Achilles and the Tortoise.


-- SH


2009-04-20
The 'Explanatory Gap'
Reply to Stevan Harnad
Re: "It is not a tautology that some things (like people, and probably worms) feel, and that others (like stones, computers, and today's robots) don't."

No. But this is not what I said the tautology was. It is a tautology to say (eg) that "We all know what it feels like to feel." 

Re: "Meanwhile you keep missing the substantive point at issue: that feeling is something that can either be present or absent,"

Interesting point because it helps highlight the problem I have been getting at. If, as you claim, we all know 'what it feels like to feel', then presumably we would all know what it feels like not to feel? (E.g. it would make no sense to say we know what it feels like to be warm if we did not know what it feels like to be cool/cold - the word 'warm' would make no sense to us.)  Now, I have no idea what it is like not to feel (I don't mean in a localised sense like a local anaesthetic but in the global sense in which the word 'feel' seems to be intended in these discussions). In fact the only humans who, I imagine, 'know' what it is like not to feel are dead humans - and perhaps those in a deep coma. And they cannot tell us - or make the required comparison. In other words, 'feel' in the sense the term is being used in the present context has no more meaning than the word 'warm' would in a world in which there is no cold.

DA  

2009-04-20
The 'Explanatory Gap'
Reply to Derek Allan

UNCOMPLEMENTED CATEGORIES, OR, WHAT IS IT LIKE TO BE A BACHELOR?

DA: "If, as you claim, we all know 'what it feels like to feel', then presumably we would all know what it feels like not to feel?"

Harnad, S. (1987) Uncomplemented Categories, or, What is it Like to be a Bachelor? 1987 Presidential Address: Society for Philosophy and Psychology

ABSTRACT: To learn and to use a category one must be able to sample both what is in it and what is not in it (i.e., what is in its complement), in order to pick out which invariant features distinguish members from nonmembers. Categories without complements may be responsible for certain conceptual and philosophical problems. Examples are experiential categories such as what it feels like to "be awake," "be alive," "be aware," and "be." Providing a complement by analogy or extrapolation is a solution in some cases (such as what it feels like to be a bachelor), but only because the complement can in principle be sampled in the future, and because the analogy could in principle be correct. Where the complement is empty in principle, the "category" is intrinsically problematic. Other examples may include self-denial paradoxes (such as "this sentence is false") and problems with the predicate "exists."
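The complement-sampling point can be made concrete with a small toy example. The Python sketch below is purely hypothetical and is not taken from the 1987 paper or from anything else in this thread (the feature sets and function names are invented for illustration): with only positive examples, every feature the members happen to share looks like a candidate invariant; only sampling the complement isolates the features that actually distinguish members from nonmembers.

def candidate_invariants(positives):
    """Features shared by every positive example."""
    shared = set(positives[0])
    for example in positives[1:]:
        shared &= set(example)
    return shared

def distinguishing_invariants(positives, negatives):
    """Shared positive features that appear in no sampled non-member."""
    return {feature for feature in candidate_invariants(positives)
            if not any(feature in example for example in negatives)}

# Invented toy data: feature sets for members and sampled non-members of a category.
members = [{"alive", "moves", "has_nervous_system"},
           {"alive", "moves", "has_nervous_system", "vocalises"}]
non_members = [{"moves"},    # e.g. a robot arm
               {"alive"}]    # e.g. a plant

print(candidate_invariants(members))                    # {'alive', 'moves', 'has_nervous_system'}
print(distinguishing_invariants(members, non_members))  # {'has_nervous_system'}

With the member sets alone, all three shared features look equally criterial; it is the complement that does the discriminative work. That is the abstract's worry about uncomplemented experiential categories such as "being awake" or feeling, whose complement cannot be sampled at all.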


2009-04-20
The 'Explanatory Gap'
Reply to Jamie Wallace
'It feels like something' is hardly uninformative. In fact, given that the phrase refers to some absolutely specific qualitative character, it's no less contentful (it couldn't be) than 'it feels like velvet'. The problem you're highlighting, Derek, is one of capturing the target phenomenon in language, which by everyone's lights is hard. But to take that as far as you do is verging on some radical positivism or behaviourism. Ironically, one of the worst excesses of so-called analytic philosophy.

2009-04-21
The 'Explanatory Gap'
Reply to Sam Coleman
Actually, my analogy suggested that the person would say 'It feels like feeling something' (not 'it feels like something').  But even the latter is surely far less 'contentful' than 'It feels like velvet'. The person one is informing presumably knows already that the interior of the box will feel like something.  Analogies have their limits and I don't want to push this one too far. My point simply was that statements like 'we all know what it is like to feel' are about as useful and informative as the statement in my analogy.

I worry about your second point, Sam. Are you suggesting that one can know something about consciousness without being able to state it?  That seems a possible position for a mystic. But for a philosopher? (and a fortiori an analytic philosopher?)  And I don't see any similarity at all between my position and behaviourism.  I am simply asking for clarity. Behaviourists have a very different agenda (and in my experience were not notably clear.)  I'm not sure exactly what you mean by 'radical positivism' but if you mean some kind of scientism then that is a world away from my arguments.  Perhaps you could clarify?

DA
 

 

2009-04-21
The 'Explanatory Gap'
Reply to Jamie Wallace
I was being a bit cheeky, I admit. But I also don't grasp the problem you're raising, not really. Yes, consciousness is hard to get an analytical grip on. We've made some progress (e.g. Ned Block distinguished access consciousness from phenomenal consciousness; some think intentionality is another distinct aspect) but there's surely more to make. But the point is, the idea of being sensate, as opposed to not being so (by the way, I don't see how it could follow that we should know what it's like not to feel anything: that's a contradiction), seems clear enough to us (the sensate) for us to do some philosophising. It's unclear why that should lead to consensus either, or why the lack of consensus amongst analytics should count against the approach. And nor do I think we're all stuck in so much of a rut as you seem to think. I'm a bit more bushy-tailed about the whole business.

2009-04-21
The 'Explanatory Gap'
Reply to Sam Coleman
Re: "Block distinguished access consciousness from phenomenal consciousness.".  Is there an easily digestible article by Block or someone on this? (This topic is just a sideline for me - albeit an interesting one - so I don't have time for anything too lengthy or 'technical'.)

Re: "by the way i don't see how it could follow that we should know what it's like not to feel anything".  What I said was that we can't know what it is like not to feel anything (because we would need to be dead). So, saying we know what it is to feel, is like saying we know what it is to be warm when we have no idea of cold. 

RE: "It's unclear why ...the lack of consensus amongst analytics should count against the approach." I don't think it 'counts against' it exactly. But it does suggest that propositions like "We all know what it is like to feel/think etc" are very questionable. If we all really did know, one would expect much more unanimity about the conclusions to be drawn.  But my (sketchy) reading in this field suggests to me that in fact there are large areas of disagreement - and often on quite fundamental issues. (I found an article by Fodor quite interesting from this point of view.)

DA







2009-04-21
The 'Explanatory Gap'
Reply to Jamie Wallace
"ACCESS" CONSCIOUSNESS - "PHENOMENAL" CONSCIOUSNESS = ZERO

There is no difference whatsoever between "access consciousness" and "phenomenal consciousness." The distinction is purely notional, and a particularly striking example of how, when faced with a problem that we are completely incapable of solving, we love to proliferate both synonyms and pseudo-distinctions that give us the illusion either of having made some sort of progress or of at least dividing to conquer. 

Here is a (nonexhaustive) list of these specious sememes. (You are encouraged to add the ones I've missed):
consciousness, awareness, qualia, subjective states, conscious states, mental states, phenomenal states, qualitative states, intentional states, intentionality, subjectivity, mentality, private states, 1st-person states, contentful states, reflexive states, representational states, sentient states, experiential states, reflexivity, self-awareness, self-consciousness, sentience, raw feels, experience, soul, spirit, mind... 
My suggestion: spare yourself this self-deception and call a spade a spade. All of the above are covered by one simple, self-explanatory anglo-saxon term: feeling.
 
(Its verbal ("to feel") and adjectival ("felt") forms will be handy too, if ever you feel the urge to go profligate again. Feel free to speak of "feelers" and "non-feelers" too, if you must, and all the other "nons" and "uns" that come with the anglo-saxon territory. But don't get too excited: they won't help.) 

The mind/body problem is simply the problem of explaining how and why it is that some functional (i.e., physical, mechanical, dynamic, causal) states are felt states, rather than merely "functed" states. 

Till someone comes up with an explanation, 'that is all ye know on earth, and all ye need to know' -- and what you are left with is the "explanatory gap."


-- SH




2009-04-21
The 'Explanatory Gap'
Reply to Jamie Wallace
Derek: http://cogprints.org/231/0/199712004.html - that's Block on the very subject here at PhilPapers.

Steve: I think awareness and feeling are pretty readily distinguishable, just to pick at one of your claims. Blindsight, anyone? In general, we're just the people whose job it is to probe the differences/relations between all these terms you list (and the others), and I don't think it's so straightforwardly all lumped under any one of them (or any single other term of any interest).



2009-04-21
The 'Explanatory Gap'
Reply to Sam Coleman

ON NOT BLAMING THE MESSENGER 

SC:  "we're just the people whose job to it is to probe the differences/relations between all these terms you list (and the others), and I don't think it's so straightforwardly all lumped under any one of them (or any single other term of any interest)."

Yes, it is philosophers into whose unfortunate laps the mind/body problem falls (although the explanatory gap is really cognitive science's -- i.e., reverse bioengineering's: no point blaming the messenger). 

But in fussing with all these trivial variants and differences (real and notional), the messengers are just toying with the envelope instead of reading out the message, loud and clear.

SC:  "http://cogprints.org/231/0/199712004.html - that's Block on the very subject here at PhilPapers." 

I know: that's why I posted the URL (as well as the BBS Call for Commentators that I posted in 1994, when I was editing BBS, the journal that published it!).

SC:  "I think awareness and feeling are pretty readily distinguishable"

Are you aware of anything that it does not feel like something to be aware of?  Do you feel (as opposed to just funct) anything you are not aware of?

If it didn't feel like something to be aware of it, what would be left of the "awareness"? 

What does it add to "I feel sad" or "I feel warm" or "I feel a rough surface" or (to change the arbitrary sensory verb) "I hear a voice or smell a smell" to say, respectively, "I am aware I feel sad" or "I am aware I feel warm" or "I am aware I feel a rough surface" or "I am aware I hear a voice or smell a smell"? or, for that matter "I am aware of my sadness" or "I am aware of the warmth" or "I am aware of the roughness of the surface" or "I am aware of the sound of a voice or the smell of the smell"?

To me this is all massaging and permuting just one thing: That there is feeling going on (and the only variation is its content, i.e., what you happen to be feeling). (Hence that you are feeling at all, and only that, is the real mystery: How/why is there feeling rather than just functing when an organism feels (say, pain)?)

Ditto for all the "2nd-order" stuff everyone loves to get excited about and to treat as if it were something substantively different -- rather than just another form of content that comes with the territory (of being able to feel at all): The rest is just about what and how much one can feel (which is how sensation grades into perception and cognition; the functional know-how increases, and with it, mysteriously, the accompanying feeling):

The monkey certainly feels what it feels like to look at a mirror: that just feels like what it feels like to look at another monkey. 

The chimp can feel more: Both monkey and chimp are able to feel the difference between what it feels like to touch their own arm versus touching someone else's arm; but only the chimp (and not the monkey) can feel what it feels like to see his own face, as opposed to someone else's face. 

The underlying functing in both cases -- i.e. the reverse-engineering of the causal system that gives both monkey and chimp the know-how to do all the things they can [or can't] do with images of faces in mirrors, whether their own or someone else's -- is fully within cognitive science's reach. 

But not how/why it feels like something to be able to do all that. 

And that, again, is why feeling -- and nothing else -- is at the heart of the M/B problem and the "explanatory gap."

Yes, there is the possibility of a certain recursivity, such as feeling what it feels like to see my own face in the mirror, or feeling what it feels like to see a monkey see his own face in the mirror, or, if there is a mirror behind the monkey, feeling what it feels like to see the monkey see the monkey seeing himself in the mirror, and so on, for an infinity of trivially higher "orders," given sufficient mirrors. (You can do all this if you have the "mirror neurons" to mind-read with.)

By the very same token (no less trivial, though interestingly instantiated in language), I can feel blue [sad]; I can feel "I am feeling blue"; and I can feel "I am feeling that I am feeling blue" etc. etc.

All of these niceties may be nice to fiddle with, but the question raised in this thread was: How to explain it? And the "it" is the fact that we feel at all. Solve that (insoluble) problem and all the niceties come with the territory, and are a piece of cake. But if the gap persists, reveling instead in the (trivial) niceties alone gets us nowhere fast (in what is basically just a hermeneutical hall of mirrors).

What is distinguishable is feeling this vs. feeling that: There is something it feels like to feel sad, to touch velvet, to see red, to feel "blue", to feel thirsty, to hear Mozart, to want attention, to recognize yourself in the mirror, to understand the meaning of "justice" (or "qualia"!)...

The usual mistake that is made is to conflate consciousness itself (feeling) with (1) consciousness of something in particular ("the worm can feel something, but can it feel what we feel?"), (2) "degree of consciousness" ("how much can the worm feel?"), (3) "self"-consciousness ("can the worm feel what it feels like to do a cartesian cogito?"), (4) "higher-order consciousness" ("can the worm feel that it feels that it feels?"). These are all cases of feeling, but they all feel different (sometimes subtly -- just a JND in feeling space). Forget the differences. What we are trying to explain is how/why they are felt at all (rather than just functed, dynamically, adaptively, but feelinglessly).

SC:  "Blindsight, anyone?"

Blindsight is optokinetic functing without felt seeing. As such, our underlying question could be reformulated as "how/why is seeing seeing rather than just blindsighted optical functing?"

This again confirms that it is the presence/absence of feeling that is (and always was) the real "hard" problem.

(But since blind-sighted people are not Zombies, it is not true that they feel nothing at all; hence when they successfully "blind-see" something it is not that they are able to do it entirely unfeelingly; it is just that their accompanying feeling is not visual. Sometimes it takes the form of a felt sensorimotor inclination to point in this direction rather than that; or a felt shaping of one's hands in preparation for reaching for something small and round, rather than large and flat; or just a hunch that the thing is green rather than blue, even though one cannot see a thing.)

So blind-seers still feel; it feels like something to blind-see; it is just that the quality of what they blind-see -- what it feels like -- is not visual.

Moreover, there is a lot that blind-seers cannot do that seers can. So their functing (know-how) is not equivalent to that of seers (and in that respect they are just plain blind).

Subtle point: Before we start to feel too triumphant about what separates us seers from blind-seers, let us recall that most of our know-how is likewise delivered to us on a platter by our brains, just as the blind-seer's inclination to point here rather than there is. We take all of this for granted, and take the accompanying feeling to be some sort of proof of the fact that we are the ones doing the underlying work, whereas all it is is passive feeling, absent the underlying functing. (And that's a lot closer to what it looks like when we attempt a causal explanation: The real work is the functing, and the accompanying feeling is just floating there, a sop...)

In sum, blindsight simply reaffirms the perplexing role played by the presence or absence of feeling alongside our functing, and our inability to explain what independent causal role it plays (because it doesn't).

-- SH




2009-04-21
The 'Explanatory Gap'
Reply to Jamie Wallace
I have only skimmed bits of the Block article so far but my eye did fall on this comment: "No doubt there was a time when people were less introspective than some of us are now."

I started to wonder exactly when this introspection trend set in. Was it Freud maybe?  Or Proust? Or perhaps the Romantics (they spent a lot of time thinking about the self). But Rousseau, in the eighteenth century, wasn't exactly a slacker in the introspection field, given his Confessions. And then, in the seventeenth century, we have Pascal. And what about all those medieval monks in their cells? There must surely have been a bit of introspecting going on there. And we could even go back to Augustine of Hippo, who also wrote his Confessions. Where in fact do we stop?  Buddha was obviously a respectable introspector - so much so that he decided, if I am not mistaken, that the self is a delusion (which might possibly have baffled the monkeys in the mirror experiment...) And, after all, who knows what went on in prehistoric times? They can't have been spearing woolly mammoths all the time, and there were those long winter nights in their caves.

So I'm not wholly persuaded we can claim to be ahead of the game in introspection.

DA

2009-04-22
The 'Explanatory Gap'
Reply to Stevan Harnad
Stevan, while you might be right (and I think you are) that many spurious and unhelpful distinctions have been drawn in the literature, it simply is not the case that the term 'feeling' can cover for all of them.  A first-person point of view necessarily entails a spatial reference frame centered in the head of an individual, and a temporal reference frame centered in the psychological present.  These structural features of the phenomenological world are not addressed at all by use of "feeling".  In general, a lot of philosophical discussion of late has used a terribly anaemic view of experience that has almost been reduced to the sensation of redness and pain.  My phenomenological world of experience is big, rich, and wholly inadequately described by this qualia talk.  We need to be careful in our choice of terms, but we cannot sweep it all under the carpet either.

2009-04-22
The 'Explanatory Gap'
Reply to Stevan Harnad
IS THE "EXPLANATORY GAP" AN ILL-POSED PROBLEM? 


SH: What is distinguishable is feeling this vs. feeling that: There is something it feels like to feel sad, to touch velvet, to see red, to feel "blue", to feel thirsty, to hear Mozart, to want attention, to recognize yourself in the mirror, to understand the meaning of "justice" (or "qualia"!)...

Isn't asking for a causal explanation of consciousness like asking for a causal explanation of the universe? In an important sense, our understanding of consciousness (feeling, if you like) is similar to our understanding of space-time. In basic physics, space-time is a fundamental, unexplained concept. The goal of theoretical physics has been to provide a useful systematic account of the observable events/content of space-time. Little effort has been spent on trying to explain the sheer existence of space-time. Similarly, in the exploration of mind, a fruitful theoretical path would be to accept (at least initially) the existence of consciousness as an unexplained fundamental concept in the neuroscience of phenomenal experience, and focus our theoretical efforts on understanding the self-communicated content of consciousness within a biophysical framework. Fortunately, much of the content of consciousness/feeling can be distinguished, described, compared, publicly represented, and analyzed. We fall into a conceptual tar pit when questions about the existence of consciousness (feeling) get entangled with questions that are essentially about the content of consciousness.


If we are to get a better understanding of consciousness it makes good sense to adopt the notion that consciousness is a property of the living brain. The key question, however, is not "How can the brain embody consciousness?", but rather "How does the brain create the gloriously varied content of consciousness?"  This suggests that we devote much more attention to specifying putative neuronal mechanisms that can be demonstrated to generate activities in the brain that are analogous in their salient aspects to the rich phenomenal content of consciousness/feelings.

In posing the "hard problem", it is customary to plunge immediately into the realm of the ineffable --- the smell of a rose, the experience of red, the feel of a toothache. I think this is a mistake. There are many kinds of subjective experiences that are decomposable, that lend themselves to overt representation, and are measurable. The phenomenal experience of a triangular shape is just as much an instance of feeling as is the smell of a rose. But, unlike the smell of a rose, the elementary properties and detailed spatial relationships in our feeling of a triangle can be displayed in an external expression which others can observe and examine. It is instructive to reflect that although the 2-D images on our retinas constrain our perceptions, the structured content of consciousness is significantly different than our proximal retinal stimuli. It is this gap between measurable sensory excitation and vivid phenomenal experience that provides our best opportunity to understand the biological machinery of the human brain that creates conscious content. For more on these issues, see:

http://eprints.assc.caltech.edu/355/

http://eprints.assc.caltech.edu/468/

 



2009-04-22
The 'Explanatory Gap'
Reply to Jamie Wallace

We can say that what we feel is what we really are.  I say that the consciousness in the body is hardwired or "hard-pathed" for constant feeling.  True, we feel pain, hot, cold and touch, but as you sit in this chair your feet can feel the floor, your body can feel the chair and your forehead can feel the light overhead. These may be mild feelings, but feelings nonetheless.

We can say that autonomic feeling is the hardwired thought in all of us: "I"


"What does it add to "I feel sad" or "I feel warm" or "I feel a rough surface" or (to change the arbitrary sensory verb) " I hear a voice or smell a smell" to say, respectively, "I am aware I feel sad" or "I am aware I feel warm" or "I am aware I feel a rough surface" or "I am aware I hear a voice or smell a smell"? or, for that matter "I am aware of my sadness" or "I am aware of the warmth" or "I am aware of the roughness of the surface" or "I am aware of the sound of a voice or the smell of the smell"?"


We seem to be baffled that hardwired phenomenal consciousness or feeling is outside of brain consciousness, which is configurably wired, but there is really no mystery at all.

The painful feelings we feel in the body we call "bad" and the pleasurable feelings we call "good".

Our own ego or sense of pride restricts us from understanding that every form of nerve activity in the body is also an idea or thought.

Like an old coffee percolator, the central hardwired idea from the body bubbles up into the brain like a fountain, and we call this thought "I Know".


2009-04-22
The 'Explanatory Gap'
Reply to Jamie Wallace

The view from everywhere


As it happens, from the window of the room in which I'm writing, there's a terrific view, out across the Forth Valley in central Scotland, with the Ochil Hills, the Wallace Monument, Stirling Castle, and more. (Today it's nice and sunny, but not very clear.) Now, I live alone, and unless I let anyone in, I'm the only one who gets to see this precise view (though anyone nearby can get a very similar view, of course). This is, in my opinion, not just analogous to, but the same principle as, the privacy that exists inside my head.

When it comes to minds, however, we have to consider not just geographical location, but also physiological and psychological factors. In other words, to experience what I'm experiencing, it is not enough that you visit me and take a look out through my window -- not even if we could somehow occupy the same place at the same time. Our eyesight might differ, and our thought processes almost certainly will -- even if they don't, we probably can't ever be certain of that.

In one sense, the view still exists even if I'm not home, and nobody is actually seeing it. Of course, that makes it of no interest whatsoever, but surely, there is a view from everywhere, from all points, even though only a very tiny subset of all views is either accessible to us or of any interest.

We can't share the same viewpoint in absolute terms, but the accessibility of a point of view is a matter of degree: if you visit me we can both stand at my window and share almost identical views out over the valley. Our physical locations will be not quite the same, and there are those other factors already mentioned, but any two members of the same species will have a great deal in common, so only some of the non-geographical factors will be divisive, and in fact it might be surmised that the great majority will be shared.

Stevan Harnad clearly distinguishes feelings from functions but then asks why we feel. I don't believe non-functions necessarily need reasons to be, and I say there's no more need for a reason why we feel, than that there is a view from my window regardless of whether anyone is currently viewing it or not. Consciousness is nothing more nor less than a point of view. To say that there is a view from all points is therefore panpsychism, but of a sort that's trivially true: what matters is the significance of any particular viewpoint, whether that's expressed in geographical, physiological, psychological or social terms. If there was a rock sitting on my windowsill, it would, in principle, have approximately the same view that I do. Of course, the fact that it has no eyes (not to mention the brain, etc) makes that fact of no interest. But the viewpoint exists all the same. Thomas Nagel characterised absolute objectivity as "the view from nowhere". I contend that the corrective is the view from everywhere. This is the sense in which consciousness is a fundamental fact of nature, and it recognises our embodiment and embeddedness, our unity with the rest of the universe. On the negative side, it means accepting that, in absolutely objective terms, "consciousness" and "free will" are meaningless. That's what's wrong with the view from nowhere: absolute objectivity is just not good enough.

2009-04-22
The 'Explanatory Gap'
Reply to Derek Allan
Derek, you, on one hand, and most of the other contributors here, on the other, are in my opinion talking past each other. The reason is that the rest of us (as a generalisation) are attacking consciousness (or trying to), while you are attacking analytic philosophy and its approach to consciousness (or your idea of that). I submit that, unless there is a fairly substantial change in direction, there is no chance of common ground being found, and so the argument is futile. Or, to be more specific, your argument with analytic philosophy is futile. This thread, on the other hand, seems to me quite fruitful, and I'm toying with the idea of having another stab at propounding my own views, but I'm afraid I will not be addressing any of your points directly, as I prefer to engage with what seem to me to be positive contributions, or attempts at such.

2009-04-22
The 'Explanatory Gap'
Reply to Stevan Harnad
Mr. Harnad:  "It seems to me I can know perfectly well (and cartesianly, hence incorrigibly) that I have a toothache, regardless of whether I have a tooth, or even a mouth, let alone whether anyone else is measuring or can measure anything, on my body or anywhere else, and whether that measurement does or does not correlate with the existence or locus of my tooth (or mouth) or pain."

I don't know about the "cartesianly." I grant that you can know that you have a toothache, but not if you don't have a tooth, unless you want to define "toothache" to mean something other than an aching tooth.  If so, then what is a toothache?

More broadly, what are feelings?

How do you know they exist?

How do you know they don't cause anything?

And how do you know they correlate with brain functions?  (I still see a contradiction in the view that they could be only subjectively knowable, and yet known to correlate with objective processes.)

Without answers to these four questions, I'm afraid I will not see the sense of epiphenomenalism.

Mr. Harnad:  "It's exactly the same problem (and I really mean exactly) when you are asking about how/why seeing blue feels like something or you are asking about why/how going into a blue funk feels like something."

Yes, as I noted, we are in the same philosophical boat whether we are talking about inner or outer perceptions.  But the point I was making when you offered this response was about our current scientific understanding of the relevant processes.  Philosophically, yes, it is the same situation, but I would not call it a "problem."

So, again, to the broad question, why are some functions felt?, I would answer, what are you talking about? 

For one thing, I would not say that these functions are felt.  That would imply that there is something else apart from the functions which is feeling them.  That would be to relocate the locus of feeling, and I see no possible destination.

I see nothing problematic about regarding feelings as neurological functions interacting with other neurological functions, just as I see nothing problematic about regarding colors as wavelengths of light interacting with neurological functions.  The idea that these functions could occur without the feeling of color vision implies a notion of feeling which I do not understand.

2009-04-22
The 'Explanatory Gap'
Reply to Stevan Harnad
Stevan,
Suppose the functing of a particular kind of brain mechanism was theoretically specified, and on the basis of its putative operating principles, one predicted the occurrence of a particular kind of feeling never experienced before. Suppose the prediction was successful and repeatable. Would you then be inclined to accept the idea that the functing of the specified brain mechanism was the biophysical aspect of the predicted feeling?


2009-04-22
The 'Explanatory Gap'
Robin

I'm sorry you find my contributions unhelpful.  They are not intended to be. Quite the contrary.

It is not quite true that I am attacking 'analytic philosophy and its approach to consciousness'. I am attacking analytic philosophy's approach to consciousness - not quite the same thing (though I do have strong reservations about its approach in my own primary field - the philosophy of art.)

Personally, I think the question of consciousness is an extremely important one (though by no means new, as analytic philosophy sometimes seems to imply). But I also think it is an extremely difficult and profound problem, and one about which we are probably, at best, only ever likely to achieve partial understandings. To be brutally frank, what bothers me in a lot of what I read on the subject by analytic thinkers (Block is one example) is what strikes me as a serious trivialisation of the problem - a tendency to reduce it to a melange of everyday observations, results of so-called 'introspection', ideas borrowed from Hollywood movies (!), and a hodge-podge of medico-psychological findings. If it were not for the rather formidable jargon in which it is often dressed up, I think one would be more inclined to question whether it is in fact philosophy at all.

You might well think (as you imply in your email): "Well, if that is what you think, there is no possibility of dialogue". But that would only be so if all I was doing was making such judgments. On the contrary, however, I have been raising specific philosophical questions and advancing specific philosophical arguments. Certainly, my comments often tend to question the very basis of the analytic approach to the issue, but you surely wouldn't want to talk only to those who accept all your assumptions? Indeed, I'd suggest it's precisely people like me you need to be dialoguing with from time to time - the questioners, the non-initiates. The experience can surely only make your arguments more robust and your thinking more sound.

Apart from that, all I can say is that you have not actually indicated where I have gone so seriously astray.  I'd be happy to respond to any specific criticisms you might have of the points I've raised.

I would certainly encourage you to propound your own views as you say you might. I might well comment, but you can always just ignore me.

DA


2009-04-22
The 'Explanatory Gap'
Reply to Fred Cummins
FC: "many spurious and unhelpful distinctions have been drawn in the literature [but]  'feeling' can[not] cover for all of them. My phenomenological world of experience is big, rich..."
You feel a lot of different things, but the (one and only) mind/body problem is the fact that you feel at all. And the (one and only) explanatory gap is that there is no causal explanation of how or why you feel, rather than just "funct." (And I argue that there cannot be a causal explanation because there is no causal room -- unless telekinetic dualism is true, and it isn't.)


-- SH




2009-04-22
The 'Explanatory Gap'
Reply to Arnold Trehub
AT: "a fruitful theoretical path would be to accept (at least initially) the existence of consciousness as an unexplained fundamental concept" 
In other words, accept that we do feel, and that we cannot explain how or why. I agree. It's true, so we might as well accept it.
AT: "much of the content of consciousness/feeling can be distinguished, described, compared, publically represented, and analyzed"
What we feel can be described, and its brain correlates (which are almost certainly also its causes) can be found and analyzed. Reverse-engineering those will explain, functionally and unproblematically, everything we do, and are able to do. But it will not explain how or why any of that functing underlying our behavioral capacities is felt. And although we cannot do anything about that, it is definitely a (profound) explanatory gap.
AT: "The key question [is] "How does the brain create the gloriously varied content of consciousness?"  

That question will not be answered either. We will find out how the brain generates adaptive behavioral capacity, and, given that generating that capacity also happens to feel like something, we will find out the correlates (and probable causes) of those feelings. I don't think we'll have a substantive explanation of how the brain generates feeling, but I think that there will be little doubt that it does; but not being able to explain how the brain generates feeling is the lesser problem: the fact that we cannot explain why (functionally speaking, i.e., causally speaking) the brain generates feeling is the greater problem: all those gloriously varied feelings, when all that was needed for adaptive purposes -- and all there is causal room for -- is the underlying functing. The fact that (some of) those underlying functions happen (for mysterious, unexplained reasons) to be felt just stays the dangler it is.
AT: "specifying putative neuronal mechanisms that can be demonstrated to generate activities in the brain that are analogous [to] feelings"

That is unfortunately just correlates again.

AT: "unlike the smell of a rose, the elementary properties and detailed spatial relationships in our feeling of a triangle can be displayed in an external expression which others can observe and examine" 

I'm afraid I can't agree: The geometric properties of detecting and manipulating triangles are functing, and unproblematic. What it feels like to see or imagine or manipulate a triangle, in contrast, is every bit as problematic as what it feels like to see red. (Lockean primary and secondary properties don't help here.)

-- SH




2009-04-22
The 'Explanatory Gap'
RF: "I don't believe non-functions necessarily need reasons to be"
You are not surprised that organisms are not just the Darwinian adaptive machines (functors) that they ought to be (based on everything else we know and can explain)? And you are not bothered that this cannot be explained in the usual (functional) way everything else in the universe can be?
RF: "Consciousness is nothing more nor less than a point of view"
Isn't viewing a felt function? Assuming that you would not say that a camera has a "point of view," does our having one not deserve an explanation?

RF: "'consciousness' and 'free will' are meaningless" 
The fact that we feel (i.e., are conscious) is not only not meaningless, but it is perfectly true. The fact that feeling cannot have any independent causal power (unless telekinetic dualism is true, which it isn't) is likewise true, and perfectly meaningful, if not especially satisfying, if one is looking for an explanation of how and why we feel...

-- SH




2009-04-22
The 'Explanatory Gap'
JS: "you can know that you have a toothache, but not if you don't have a tooth"
No? What about referred pain, or phantom limb pain, or hysterical pain, or hallucinated pain?
JS: "what are feelings?"

Everyone who feels knows that, even if they affect not to.


Please see earlier in the thread about ostensive definition and knowing what it "feels like to feel."
JS: "How do you know they exist?"

I pinch myself occasionally: Try it.
JS: "How do you know they don't cause anything?"

I know they feel as if they cause things (e.g., when I move my finger because I feel like it). But I notice that there are 4 fundamental forces in the universe, and that they cover my brain's every move, with no remaining degrees of freedom. There's no room for a 5th force unless telekinetic dualism is true (and it's not).
JS: "And how do you know they correlate with brain functions?"  

Classical psychophysics: as my anxiety level goes up, my GSR goes up, and vice versa. (That does not prove correlation, because there's always room for skepticism as well as incommensurability arguments, but it's good enough for a realist and a naturalist. It's not good enough to close the explanatory gap, though, because it's just correlation, not causation.)
JS: "to the broad question, why are some functions felt?, I would answer, what are you talking about?"  

No reply, if the difference between what happens to you when I pinch you and what does not happen (presumably) to a robot if I pinch it does not make it crystal clear to you exactly what I am talking about.
JS: "I would not say that these functions are felt.  That would imply that there is something else apart from the functions which is feeling them." 

Well, what would you say that pinching you was, and pinching the robot (or you under anesthesia) wasn't?
JS: "I see nothing problematic about regarding feelings as neurological functions interacting with other neurological functions, just as I see nothing problematic about regarding colors as wavelengths of light interacting with neurological functions.  The idea that these functions could occur without the feeling of color vision implies a notion of feeling which I do not understand."

Where you are not just seeing truths (as I too see them), you seem to be seeing necessary truths, whereas all I see is unexplained truths -- and truths for which it seems perfectly reasonable (by analogy with everything else) to feel as if they call for explanation...


-- SH




2009-04-22
The 'Explanatory Gap'
Reply to Arnold Trehub
I just wanted to comment on one paragraph of Arnold's post where he writes:

"If we are to get a better understanding of consciousness it makes good sense to adopt the notion that consciousness is a property of the living brain. The key question, however, is not "How can the brain embody consciousness?", but rather "How does the brain create the gloriously varied content of consciousness?"  This suggests that we devote much more attention to specifying putative neuronal mechanisms that can be demonstrated to generate activities in the brain that are analogous in their salient aspects to the rich phenomenal content of consciousness/feelings."

The key problem in that paragraph is the word 'create'.  In what sense might the brain create "the gloriously varied content of consciousness."?  It is probably safe enough to say that the brain accompanies and makes possible this varied content - given that we suppose the (brain) dead don't have said content. But create? - that is another matter. The claim seems to rule out the function of anything we might call a mind (since what purpose would it serve if the brain did all the creating?)  Are we in a position to do that in such a confident manner?

DA

2009-04-24
The 'Explanatory Gap'
Reply to Derek Allan
DA: "The key problem in that paragraph is the word 'create'.  In what sense might the brain create ''the gloriously varied content of consciousness'?"  

I use the word "create" in the sense of the brain constructing the phenomenal content of consciousness. I probably should have used the word "construct" instead of "create"


DA: "The claim seems to rule out the function of anything we might call a mind (since what purpose would it serve if the brain did all the creating?)  Are we in a position to do that in such a confident manner?"

We are getting there. As I wrote in The Cognitive Brain (MIT Press 1991), "It is the total specific content of cognition, the current physical state of specialized mechanisms in an individual brain shaped by encounters in a world both real and imagined, that constitutes a mind." You, of course, might disagree. But open discussion of the evidence pro and con can only help our enterprise of understanding the mind.







2009-04-24
The 'Explanatory Gap'
Reply to Stevan Harnad
SH: You are not surprised that organisms are not just the Darwinian adaptive machines (functors) that they ought to be (based on everything else we know and can explain)?

But in strictly objective terms that's exactly what they are: mere functors. There's no such explanatory gap in science (genuine science, that is, as opposed to fictional or pseudo-science). Only philosophers can tie themselves in such knots. (And I say that as someone who considers himself a "natural" philosopher, who missed his vocation, but who, despite being now middle-aged, is going back into full-time education in the hope of spending the last few years of his working life as a professional philosopher.)

SH: And you are not bothered that this cannot be explained in the usual (functional) way everything else in the universe can be?

Here's an explanation, or at least the outline of one, though whether you will find it satisfying is another matter: There's a sort of bootstrapping that will take place during the development of any sufficiently intelligent social species (including any sufficiently "intelligent", "social", robots) whereby individuals develop the concepts of self and other, become self-conscious, and then wonder what consciousness is. But to attribute consciousness is basically "just" to apply the self-model to the other, and to consider oneself as conscious is to reflect on sensory information streams that are, in principle, no different from those of a robot. (And/or perhaps to reflect on such reflection. I used inter/subjective concepts (such as concept) for concision in that explanation but they can be avoided at the cost of multiplying the word count many times.) So I'm sympathetic to Dennett's views and to the higher order thought scenario. To attribute consciousness is to identify with the attributee (or at least to view it as something with which such identification is in principle possible), so at the highest level, to consider the universe to have conscious aspects is to identify with it. I call this "inter/subjective panpsychism".

SH: Isn't viewing a felt function? Assuming that you would not say that a camera has a "point of view," does our having one not deserve an explanation?

As I tried to suggest in "the view from everywhere", I would say that a camera does have a point of view (as does the rock on my windowsill), it's just that, lacking psychological and social factors, it's not one that interests us. (Actually, we are interested in the camera's POV sometimes, or we'd never look at photographs.)

SH: The fact that we feel (i.e., are conscious) is not only not meaningless, but it is perfectly true.

It's true in inter/subjective terms, but meaningless in strictly objective terms. Consciousness is an inter/subjective concept, so to try to deal with it in strictly objective terms is a category error. That should not, however, be taken to imply that inter/subjectivity is inferior to objectivity (or vice versa). We need to choose the horses for the courses. This is like a dual aspect theory, but instead of metaphysical aspects, we have psychological perspectives. As with the aspects, though, inter/subjective and objective perspectives are equally valid in general terms, while each is more applicable in particular contexts. In science, consciousness is out of context: that is a language game in which the word has no use and therefore no meaning (Wittgenstein).

2009-04-24
The 'Explanatory Gap'
Reply to Stevan Harnad
AT: "The key question [is] "How does the brain create the gloriously varied content of consciousness?"  
          SH: That question will not be answered either. We will find out how the brain generates adaptive behavioral capacity, and, given that generating that capacity also happens to feel like something, we will find out the correlates (and probable causes) of those feelings.

We can do more than find out how the brain generates adaptive behavioral capacity. Importantly, we can also find out how the human brain generates its phenomenal representation of the world to which it has to adapt.


AT: "specifying putative neuronal mechanisms that can be demonstrated to generate activities in the brain that are analogous [to] feelings"

          SH: That is unfortunately just correlates again.

I don't agree. Brain analogs of feelings (conscious content) are much more than simple correlates of feelings. The significant distinction is that correlates which are not also analogs of feelings have only the relationship of systematic co-occurrence with the feelings, while analogs of the feelings have at least some properties that are similar to salient aspects of the feelings. The difference is illustrated in these experimental findings:

1. In binocular rivalry experiments it has been found that a particular neuron in the visual system fires above threshold when one of the competing stimuli is perceived, and stops firing when the other stimulus is perceived. This is an example of a simple correlation between a feeling and a brain event.

2. In studies of the perception of equal sized objects in 2D perspective drawings it has been found that the "distant" object in the perspective drawing is judged to be larger than its equal sized "nearer" object. Brain scans (fMRI) taken at the same time show an enlargement of the area of neuronal activity in the primary visual cortex which corresponds to the ratio of the judged/perceived size of the far object compared to the near object. This is an example of an analog relationship between a feeling and a brain event.

My contention is that brain analogs of this kind are much more informative than mere correlates and provide powerful empirical data that we can use to test our theoretical models. 

AT: "unlike the smell of a rose, the elementary properties and detailed spatial relationships in our feeling of a triangle can be displayed in an external expression which others can observe and examine" 

         SH: I'm afraid I can't agree: The geometric properties of detecting and manipulating triangles are functing, and unproblematic. What it feels like to see or imagine or manipulate a triangle, in contrast, is every bit as problematic as what it feels like to see red. (Lockean primary and secondary properties don't help here.)

I'm not talking here about detecting and manipulating a triangle. I'm talking about our external expression (e.g., in a verbal report or drawing) of our internal phenomenal representation (feeling) of a triangle. The problem of getting a good external expression of what it feels like to see red is much harder. 



2009-04-25
The 'Explanatory Gap'
Reply to Arnold Trehub
AT: 'I use the word "create" in the sense of the brain constructing the phenomenal content of consciousness. I probably should have used the word "construct" instead of "create".'

This does not answer my objection. The issue is whether the brain simply makes possible what the mind does, or does it all itself.  (The mere fact - e.g. - that brain states are observed to change as experience changes doesn't resolve this either way.)  


AT: ""It is the total specific content of cognition, the current physical state of specialized mechanisms in an individual brain shaped by encounters in a world both real and imagined, that constitutes a mind." You, of course, might disagree. But open discussion of the evidence pro and con can only help our enterprise of understanding the mind."

I do disagree. But I certainly don't oppose "open discussion of the evidence pro and con".

What I do oppose is the tendency I note quite often to assume what has not yet been established; and the claim that 'we are getting there' strikes me as being of this kind. We may be getting somewhere but is it to a position in which human experience can be completely explained neurologically (which is, I gather, where the 'there' is?)

This is also my objection to the phrase 'explanatory gap'.  It assumes/implies that we are on the right road but haven't quite reached our desired destination yet.  But the 'neurological' path may in fact be the wrong road, and may be taking us to a comprehensive dead end, not just to a temporary 'gap'.  How do we know that this is not the case? 

DA 

2009-04-25
The 'Explanatory Gap'
Reply to Jamie Wallace
Let me expand a little on my objection to the phrase 'explanatory gap'.

When Watson, Crick, etc. worked out how DNA works, they had certainly filled such a gap. I am not a biologist, and not at all knowledgeable in this field, but prior to their breakthrough everyone would, I assume, have thought that, just as everything known so far about the topic had been explained in purely physical terms, so the full explanation, when it came, would also be in purely physical terms.  So Crick and co had, in effect, reached a certain point along a road and simply (!) needed to go the rest of the way.

But the so-called 'explanatory gap' in the area we are discussing is not like that at all - or we cannot assume it is. We already know a lot about how the brain works in physical terms (I gather we do - I don't really bother to keep up). But we cannot for a moment assume that simply knowing more about this will explain the operations of what we call 'the mind' and key related issues such as consciousness etc.  Those issues may raise problems that are not explicable in physical terms at all. (Personally, I doubt very much that they are.)

The situation is made worse by the fact that we don't even seem to know with any clarity what we are looking for.  Crick and co did know that: they knew they needed an explanation of the mechanisms of transmission of genetic characteristics, mutations etc. (This is only amateur science - others might know the terminology better than I.) But in our field there is not even any clear, generally agreed idea of what 'mind', 'consciousness', 'feelings' etc might mean.  My own reading of the literature - limited admittedly - gives me the impression that the notion of consciousness, for example, tends to be used very vaguely indeed, as if we all 'just know' what it means - until of course we are asked to state it clearly and then (rather like Augustine with the problem of time) it just dissolves into thin air or confusion or both.

In these circumstances, talking about an 'explanatory gap' strikes me as a case of whistling in the dark - a determination to believe at all costs that one is on the right track when there is no clear basis for such an assumption and when one doesn't even know with any clarity where one wants to end up...

DA



2009-04-25
The 'Explanatory Gap'
Reply to Arnold Trehub

ON PSYCHOPHYSICAL INCOMMENSURABILITY AND SENSORY-SEMANTIC DUALS

AT: "brain analogs... are much more informative than mere correlates"

I am going to think out loud about "duals" now, because I am not really sure yet what implication I want to draw from it for the question of psychophysical "analogs" vs "correlates." 

The question is interesting (and Saul Kripke gave it some thought in the '70s when he expressed some skepticism about the coherence, hence the very possibility, of the notion of "spectrum inversion"): Could you and I really use exactly the same language, indistinguishably, and live and interact indistinguishably in the world, while (unbeknownst to us) green looks (i.e., feels) to me the way red does to you, and vice versa? 

Kripke thought the answer was no, because with that simple swap would come an infinity of other associated similarity relations, all of which would likewise have to be systematically adjusted to preserve the coherence of what we say as well as do in the world. ("Green" looks more like blue, "red" looks more like purple, etc.) 

At the time, I agreed, because I had come to much the same conclusion about semantic swapping: Would a book still be systematically interpretable if every token of "less" were interpreted to mean "more" and vice versa? (I don't mean just making a swap between the two arbitrary terms we use, but between their intended meanings, while preserving the usage of the terms exactly as they are used now.) 

I was pretty sure that the swap would run into detectable trouble quickly for the simple reason that "less" and "more" are not formal "duals" the way some terms and operations are in mathematics and logic. My intuition -- though I could not prove it -- was that almost all seemingly local pairwise swaps like less/more would eventually require systematic swaps of countless other opposing or contradictory or dependent terms ("I prefer/disprefer having less/more money..."), eventually even true/false, and that standard English could not bear the weight of such a pervasive semantic swap and still yield a coherent systematic interpretation of all of our verbal discourse. And that's even before we ask whether the semantic swap could also preserve the coherence between our verbal discourse and our actions in the world.

But since then I've come to a more radical view about meaning itself, according to which the only difference between a text (a string of symbols P instantiated in a static book or a dynamic computer) that is systematically interpretable as meaning something but has no "intrinsic intentionality" (in Searle's sense), and a text (say, a string of symbols P instantiated in the brain of a conscious person thinking the thought that P), is that it feels like something to be the person thinking the thought that P, whereas it feels like nothing to be the book or the computer instantiating the symbol string. Systematic interpretability ("meaningfulness") in both cases, but (intrinsic) meaning only in the (felt) one.

I further distinguish meaning, in this felt sense, from mere grounding, which is yet another property that a mere book or computer lacks: Only a robot that could pass the robotic Turing Test (TT; the capacity to speak and act indistinguishably from a person to a person, for a lifetime) would have grounded symbols. But if the robot did not feel, it still would not have symbols with intrinsic "intentionality"; it would still be more like a book or computer, whose sentences are systematically interpretable but mean nothing except in the mind of a conscious (i.e., feeling) user. (It is of course an open and completely undecidable question whether a TT-passing robot would or would not actually feel, because of the other-minds problem. I think it would -- but I have no idea how or why!)

But this radical equation of intrinsic meaning (as opposed to mere systematic interpretability) with feeling would make Kripke's observations about color-swapping (i.e., feeling-swapping) and my observations about meaning-swapping into one and the same thing.

It is not only that verbal descriptions fall short of feelings in the way that verbal descriptions fall short of pictures, but that feelings (say, feelings of greater or lesser intensity) and whatever the feelings are "about" (in the sense that they are caused by them and they somehow appertain to them) are incommensurable: The relation between an increase in a physical property and its felt quality (e.g., an increase in physical intensity and a felt increase in intensity) is a systematic (and potentially very elaborate and complicated) correlation (more with more and less with less), but does it even make sense to say it is a "resemblance"?

For this reason, brain "analogs" too are just systematic correlates insofar as felt quality is concerned. I may have (1) a neuron in my brain whose intensity (or frequency) of firing is in direct proportion to (2) the intensity of an external stimulus (say, the amplitude of a sinusoid at 440 hz). In addition, there is the usual log-linear psychophysical relationship between the stimulus intensity (2)  and (3) my intensity ratings. The stimulus intensity (2)  and the neuronal intensity (1)  are clearly in an analog relationship. So are the stimulus intensity (2) and my intensity ratings (3) (as rated on a 1-10 scale, say). And so are the neuronal intensity (1) and my intensity ratings (3). But you could get all three of those measurements, hence all three of those correlations, out of an unfeeling robot. (I could build one already today.) How does (4) the actual feeling of the intensity figure in all this?
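
(To make that claim concrete, here is a minimal illustrative sketch in Python -- a toy of my own, not anyone's actual model -- of an "unfeeling robot" that exhibits all three correlations; every name and numerical constant is an arbitrary assumption.)

import math

def firing_rate(stimulus_amplitude):
    # (1) internal "neuronal" intensity, assumed directly proportional to (2) the stimulus amplitude
    return 50.0 * stimulus_amplitude

def intensity_rating(stimulus_amplitude):
    # (3) a rating on a 1-10 scale, following an assumed Fechner-style log-linear rule
    rating = 1.0 + 3.0 * math.log10(1.0 + 9.0 * stimulus_amplitude)
    return max(1.0, min(10.0, round(rating, 2)))

if __name__ == "__main__":
    for amp in (0.1, 0.2, 0.4, 0.8, 1.0):  # amplitudes of the tone (arbitrary units)
        print(amp, firing_rate(amp), intensity_rating(amp))
    # (1), (2) and (3) rise and fall together, so all three pairwise correlations
    # are present -- yet nothing in this program feels anything.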

You want to say that my intensity ratings are based upon an "analog" of that felt intensity. Higher rated intensity is systematically correlated with higher felt intensity, and lower rated intensity is correlated with lower felt intensity. But in what way does a higher intensity rating RESEMBLE a higher intensity feeling? Is the rating not just a notational convention I use, like saying that "higher" sound-frequencies are "higher"? (They're not really higher, like higher in the sky, are they?) (The same is true if I instead use the "analog" convention of matching the felt frequency with how high I raise my hand. And if it's instead an involuntary reflex rather than a voluntary convention that is causing the analog response -- say, pupillary constriction in response to increased light intensity -- then the correlated feeling is even more side-lined!)

The members of our species (almost certainly) all share roughly the same feelings. So we can agree upon, share and understand naming conventions that correlate systematically with those shared feelings. I use "hot" for feeling hot and "cold" for feeling cold, because we have both felt those feelings and we share the convention on what we jointly agree to call what. 

That external corrective constraint gets us out of another kind of incorrigibility: Wittgenstein pointed out in his argument that there could not be a purely private language because then there could be no error-correction, hence there would be no way for me to know whether (i) I was indeed using the same word systematically to refer to the same feeling on every occasion or (ii) it merely felt as if I was doing so, whereas I was actually using the words arbitrarily, and my memories were simply deceiving me.

So feelings are clearly deceiving if we are trying to "name" them systematically all on our own. But the only thing that social conventions can correct is their grounding: What we call (and do with) what, when. I can't know for sure what you are feeling, but if you described yourself as feeling "hot" when the temperature had gone down, and as feeling "happy" when you had just received some bad news, I would suspect something was amiss.

Those are clearly just correlations, however. Words are not analogs of feelings, they are just arbitrary labels for them. And although a verbal description of a picture can describe the picture as minutely as we like, it is still not an analog of the picture, just a symbolic description that can be given a systematic and coherent interpretation, both in words and actions (if it is TT-grounded).

Yet we all know it can't be symbolic descriptions all the way down: Some of our words have to have been learned from (grounded in) direct sensorimotor (i.e., robotic) experience. "How/why did that experience have to be felt experience?" That's the question we can't answer; the explanatory gap. And a lemma to that unanswered question is: How/why did that felt experience have to resemble what it was about -- as opposed to merely feeling like it resembles what it is about? Why isn't grounding just functing (e.g., the cerebral substrate that enables us to do and say whatever needs to be done and said to survive, succeed and reproduce, TT-scale)? And why is there anything more to meaning than just that? 

To close with a famous example of analogs: Roger Shepard showed psychophysically that the time it takes to detect whether two shapes are different shapes or just the same shape, rotated, is proportional to the degree of rotation. This suggests that the brain is encoding the shapes in some analog form, and then doing some real-time analog rotation to test whether they match. This is all true, but as it happens the rotation occurs too fast for the subject to feel that it is happening! So here we have the same three-way correlation (internal neural process (1), external stimulus (2), subject's output (3)) as in the intensity judgments, but without any correlated feeling.
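
(For concreteness, a minimal sketch of that chronometric pattern in Python; the intercept and slope below are placeholder assumptions of my own, not Shepard's measured values.)

def predicted_response_time_ms(rotation_angle_deg, base_ms=500.0, ms_per_degree=15.0):
    """Assumed linear model: RT = base + rate * angle."""
    return base_ms + ms_per_degree * rotation_angle_deg

if __name__ == "__main__":
    for angle in (0, 45, 90, 135, 180):
        print(angle, predicted_response_time_ms(angle))
    # The linear trend is the behavioral signature of an analog (rotation-like)
    # internal process; it says nothing about whether that process is felt.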

So is the neural "analog" still to count as an analog of feeling, even when there is no feeling?

By the very same token, how is one to determine whether psychophysical data are analogs of feeling, rather than merely systematic functional correlates (especially when the explanation of how and why the correlated functions are felt at all remains a complete mystery, causally, hence functionally)? (This is the public counterpart of Wittgenstein's private problem of error.)

All this, but I still think that global systematic duals do not in general work, so neither sensory nor semantic pairwise swapping is possible (except perhaps in some local special cases) while preserving the coherence of either actions in the world or the interpretability of verbal discourse. I don't think, however, that the fact that coherent global duals are impossible, even if it is true, entails that feelings are analogs of physical properties, rather than merely systematic correlates.


2009-04-26
The 'Explanatory Gap'
Reply to Derek Allan
DA: "What I do oppose is the tendency I note quite often to assume what has not yet been established; and the claim that 'we are getting there' strikes me as being of this kind. We may be getting somewhere but is it to a position in which human experience can be completely explained neurologically (which is, I gather, where the 'there' is?) 

This is also my objection to the phrase 'explanatory gap'.  It assumes/implies that we are on the right road but haven't quite reached our desired destination yet.  But the 'neurological' path may in fact be the wrong road, and may be taking us to a comprehensive dead end, not just to a temporary 'gap'.  How do we know that this is not the case? "

I think that Derek sets the bar higher than any explanatory effort can achieve. A complete explanation of human experience is beyond the power of any intellectual approach, be it neuro-scientific, psychological, philosophical, or a combination  of all. Any candidate explanation must be tested against relevant empirical and logical evidence. For example, the hypothesized structural and dynamic properties of the neuronal mechanisms and systems detailed in The Cognitive Brain have been able to explain important aspects of our phenomenal experience that have previously eluded explanation, and have also successfully predicted phenomena that were never before experienced. I take this as good evidence that the 'neurological path' may in fact be the right road. Of course, we can't be certain/know that this is the case. But the pragmatics of scientific inquiry certainly suggests that this is a promising road to follow.





2009-04-26
The 'Explanatory Gap'
Reply to Jamie Wallace

I'd agree with Chalmers (post #2 above) that a poll would be interesting.  I would guess for example that Derek falls into category #1 and Stevan falls into category #3. 

The question of why we should feel anything at all is often used to get across the concept of an explanatory gap. However, the following might also have been suggested in the literature - though I've not seen it personally and would be interested in feedback.

One can either claim phenomenal consciousness is epiphenomenal or not. Certainly, a transistor, for example, changes state because of a voltage/current applied to the base. It will never change state otherwise, making downward causation impossible and p-consciousness epiphenomenal, as Stevan Harnad suggests.* We might make a similar claim of neurons, though I'm aware of at least 2 reasonable objections** to this perspective in the literature and there may be others. 

If consciousness is epiphenomenal, we might claim the experience reliably corresponds to behavior, as Stevan suggests. Assuming it is epiphenomenal, however, means that the phenomenon can have absolutely no influence whatsoever on behavior. The experience could be anything: pain, color, auditory experience, or any category of experience as suggested for example by Chalmers (pg 6, "The Conscious Mind"). It could also be an experience NOT categorized, one we have no subjective knowledge of whatsoever! Note also that if it is epiphenomenal, we can't assume the phenomenon will be reported reliably.

In other words, forget about why we should experience anything at all.  If p-consciousness is epiphenomenal, we might ask for example:  Why shouldn't I experience only a whistling noise, a photism, or just pure euphoria as I drive to work, stopping at intersections and avoiding other traffic?  I could experience anything but I would still tell you that I see the car approaching in the distance and I'd still report all spatial and auditory information such as distances to oncoming traffic and sounds from horns and broken mufflers exactly as if my phenomenal experience reliably corresponded to the actual, physical information my brain was able to receive through visual and auditory sensors.  But I wouldn't experience any of that.  I might only experience something utterly uncorrelated to the actual environment.

If p-consciousness is epiphenomenal, the explanatory gap can be viewed from the perspective of, "Why should the experience produced correspond to reality instead of simply being a phenomenal experience which has no correlation whatsoever?" If p-consciousness can't influence behavior, then this phenomenon cannot make itself known by adjusting behavior, nor by reporting the actual phenomenon that occurs, on pain of telekinesis, as Stevan points out.

In addition to providing personal opinions, I'd be very interested if you would be so kind as to suggest papers or literature that might address this perspective.

*I'm sure there are those who would disagree with this and allow for some kind of downward causation in a conventional computer system, but it really doesn't matter.  The point regards epiphenomenal p-consciousness.

**The 2 I'm aware of:  1. The argument that nonlinear physical systems are in some way holistic/non separable (esp. Alwyn Scott) and therefore capable of downward causation and 2. quantum mechanical systems. 


2009-04-26
The 'Explanatory Gap'
Reply to David Chalk
 CORRELATION, CORRESPONDENCE AND INCOMMENSURABILITY
DC: "One can either claim phenomenal consciousness is epiphenomenal or not"  
[I'd have said "One can either claim that feelings are or are not causal"]
DC: "forget about why we should experience anything at all.  If p-consciousness is epiphenomenal...'Why should the experience produced correspond to reality instead of simply... [having] no correlation whatsoever?'"  
First, a simplified gloss:
"forget about why we should feel anything at all.  If feelings are noncausal... 'Why should they correspond to reality instead of simply... [having] no correlation whatsoever?'" 
This was the subject of the thread about correlates vs. analogs in psychophysics. "Correspondence" is a bit of a weasel word: It could refer to a reliable but arbitrary mapping or a physical isomorphism. I'd say (some) feelings were reliably correlated with (some) objects and events temporally and functionally, but that they were qualitatively incommensurable with them -- and that those were just two sides of the same coin: the noncausal status of feeling. It is always the functing that bears the weight, not the feeling.
DC: "I'd be very interested if you... suggest papers or literature that might address this perspective."
(I regret I cannot help on this score, except to add that for my part I would be grateful if pointers to the literature were also always accompanied by a simple summary of the argument that the cited work is making. Without wishing to offend anyone, I do think this topic is more likely to advance if we minimize both the terminology and the reliance on prior Writ, since too many words and too little of substance have been written on the problem, and simplicity is so much more likely to keep our eyes on the ball. The "arguments" referred to below are a case in point.)
DC: "The argument that [1] nonlinear physical systems are in some way holistic/non separable... and... [2] quantum mechanical systems" 
[1] is (in my opinion) empty hand-waving (all the specifics of feeling slip right out of "nonlinearity" -- ubiquitous in the world) and as for QM  [2]: the explanatory gaps of one field are not filled by the explanatory gaps of another!






2009-04-26
The 'Explanatory Gap'
Reply to David Chalk


DC: "I'd agree with Chalmers (post #2 above) that a poll would be interesting.  I would guess for example that Derek falls into category #1 and Stevan falls into category #3."

No. The closest for me, I think, would be #4 although it is really too categorical for my liking. Basically I am an agnostic in this field.

I am not sure by the way what purpose a poll would serve. Democracy is a great thing but not for solving philosophical problems.

DA



2009-04-26
The 'Explanatory Gap'
DC:
(1) There's no explanatory gap, or one that's fairly easily closable.
(2) There's a deep explanatory gap for now, but we might someday close it.
(3) There's a permanent explanatory gap, but not an ontological gap (so materialism is true).
(4) There's a permanent explanatory gap, and a corresponding ontological gap (so materialism is false).

(3') There's a permanent explanatory gap (because feelings are noncausal),  but not an ontological gap (because telekinetic dualism is false).

2009-04-29
The 'Explanatory Gap'
Reply to Stevan Harnad
Hi Stevan

For the unlettered outsider like me, what is 'telekinetic dualism' exactly?

DA

2009-04-29
The 'Explanatory Gap'
Reply to Derek Allan
TELEKINETIC DUALISM: MIND OVER MATTER
DA: "For the unlettered outsider like me, what is 'telekinetic dualism' exactly?"
"Telekinesis" (or "psychokinesis") is often also called "mind over matter": It's spoon-bending by Uri Geller. Not just "action at a distance" as in electromagnetism or gravity, but action at a distance caused by mental power alone. It's what psychics do. Spooky stuff.

I (and I assume you) don't believe a word of it.

But even when I bend a spoon with my hands, rather than at a distance, it feels as if it is my mind that is causing the bending, by causing my hands to bend the spoon.

The alternative is that it is electrochemical activities in the motor regions of my cerebral cortex that are causing my hands to bend the spoon, and that my mentally willing it had nothing to do with it -- except that it was quite closely correlated with it. 

(How closely correlated is still a matter for some debate, as, for example, the work of Benjamin Libet might possibly be showing: It could be that an unfelt cerebral event very slightly precedes my feeling of willing my hand to move.)

So telekinetic dualism would be true if there really existed a mental force, rather like the other 4 fundamental forces of nature -- electromagnetism, gravitation, the strong nuclear force and the weak nuclear force (if there are indeed 4, for they may be destined to be unified by some grand theory one day) -- and that 5th force, not the other 4, were the cause of the movement of my arm.

But there is no 5th force. The electrochemical/mechanical brain state preceding my movement, and triggering it, explains the cause of my movement as fully as its trivial counterpart does in a simple robot (except of course that the brain is much more complicated and capable); and whether the trigger point in the causal chain coincides with the moment I feel I am initiating the movement or precedes it slightly does not matter a whit: Unless telekinetic dualism is true, my feeling that I am doing it because I feel like it in reality plays no causal role in my movement (even though the feeling is real enough).

And that is the mind/body problem. Telekinetic dualism would have been the solution -- if it had been true. But it isn't. There is no mental force, even though it feels like it: It's all matter over matter. But we cannot explain why or how, because there is no causal room. That's the explanatory gap.

-- SH

2009-04-29
The 'Explanatory Gap'
Reply to Stevan Harnad
SH: "But even when I bend a spoon with my hands, rather than at a distance, it feels as if it is my mind that is causing the bending, by causing my hands to bend the spoon."

I have actually never thought about this, but now that I do, I can't say I have any strong views either way.

In general, I doubt very much that introspection can tell us anything important about the mind. It's a bit like asking a fish to say what the outside of its aquarium looks like.

DA


2009-04-29
The 'Explanatory Gap'
Reply to Stevan Harnad

Hi Stevan, I appreciate your responses, esp. correlates vs. analogs in psychophysics.  Because I'm not 'pro' status, my posts may lag by 24 hours or more.  I'd unfortunately already submitted my post when you responded regarding correlates vs. analogs in psychophysics.

Please correct me if I'm wrong, but I believe the concept of 'telekinesis' is abhorrent because it suggests there are non-physical phenomena which influence the comings and goings of material things. Whether we're talking about classical or quantum phenomena, such things as momentum, position and fields are measurable in some way. To suggest these things might be influenced by 'feeling' seems ludicrous. However, suggesting that momentum, position or fields can create phenomena that are not measurable by measuring the momentum, position and field is just as serious a problem as suggesting said phenomena influence those measurements.

If you don't want to accept telekinesis, then why accept the corollary which is that objectively measurable properties produce phenomena that are not objectively measurable? Call this "materialkinesis" or m-kinesis for short if you'd like. If you can't measure it, don't accept it. Unfortunately, that brings us back to #1, and neither you nor I are happy about that.

Earlier you suggested that experience/qualia/feeling are measurable by the subject and reportable, but are not causal or perhaps are epiphenomenal.  Could you be so kind as to clarify this?  I'll explain what I think you mean and what I think you don't mean.  By so doing, hopefully you can clear up my confusion.

I believe this gets back to telekinesis.  You don't want experience to influence anything physical.  You don't want there to be an unmeasurable influence on any material comings and goings.  As an example, we might consider a computer being used to control some process such as the launching of a rocket.  One might say the computer has a causal influence over this process, albeit an epiphenomenal one.  Computers are made up of electric circuits, so the computer's (epiphenomenal) causal influence over the launch is dependent on those circuits.  Similarly, the circuits are made up of bits of wire, capacitors, resistors and transistors that are integrated onto a chip.  So the launch is caused by the computer is caused by the circuit is caused by the chips is caused by the transistors is caused by the molecules/atoms making up the transistors.  One might take the position that everything above the molecular level is epiphenomenal, and certainly philosophers have suggested exactly this.  But I don't think anyone is suggesting that computers, circuits or transistors are not causal.  They are all part of a causal chain from atomic and molecular interactions to rocket launch.  To conclude, I believe you're suggesting that experience is not part of that causal chain.  Experience/qualia/feeling can not play a part in any way in this causal chain.  What I don't think you're suggesting is that feelings are epiphenomenal in the same sense as the computer's causal influence is epiphenomenal on the circuits and transistors since this would put feelings into the causal chain of events.

If this is backwards, my apologies.  Now, let's suggest that the experience of the color red can be reliably measured by a person.  In contrast, a digital camera can take light and convert it to a digital pattern which can be reconverted to wavelength using just three pixels on a computer screen.  The intensity we observe from each pixel is interpreted and converted to color inside the brain.  I doubt anyone would say that the experience of color exists at any step of the process between recording the color red using the camera and the reproducing of the color at a computer screen.  However, let's say we had a device which could reliably measure the experience of red.  A human is just such a device if experience reliably correlates to function/behavior.  The measurement is entirely internal, but let's assume for now that this internal measurement is reliable.

Now, if this internal measurement is reliable, then let's assume we can similarly produce this experience computationally. Let's assume our computer's transistors can produce this reliable correlation and report dutifully that the experience has been accomplished. If this is possible, then that computer is no different in principle from a multitester. It has physically measured the phenomenon in question and produced a physical report. This much is m-kinesis. 

If the measurement of the experience is reliable, then that measurement can be (must be) converted to a physical signal so that it is reportable, else it is not reliable.  So if the measurement of experience is reliably reported, then something can be done with that signal.  The signal can be interjected into a causal chain as suggested by the rocket launching example above.  We can have an if/then statement in our computer which says, If Xperience = RED then "SCRUB LAUNCH".  In this way, qualia/experience/feeling is interjected into the causal chain. 
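
(To make that if/then step concrete, here is a minimal sketch in Python; the name "xperience", the 0.7 threshold and the launch scenario are hypothetical stand-ins of my own, introduced only to show an internally measured signal entering a causal chain.)

def measure_xperience(sensor_reading):
    # Stand-in for the "internal measurement": map a sensor reading to a label.
    return "RED" if sensor_reading > 0.7 else "NOT_RED"

def launch_decision(sensor_reading):
    xperience = measure_xperience(sensor_reading)
    if xperience == "RED":
        return "SCRUB LAUNCH"  # the internally measured signal affects behavior
    return "PROCEED"

if __name__ == "__main__":
    print(launch_decision(0.9))  # -> SCRUB LAUNCH
    print(launch_decision(0.3))  # -> PROCEED
    # Whether the signal labeled "xperience" is a feeling, rather than just
    # more functing, is exactly what code like this cannot settle.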

Unless I've screwed up somewhere, which is entirely possible, the bottom line is that experience/feeling can be a part of the causal chain if it is internally measurable (subjectively measurable) and as long as that measurement is reliable. 

One might still claim this influence is epiphenomenal as I've defined epiphenomenal above using the rocket launch example. If it is epiphenomenal, then we have a slightly different problem to resolve. The thing about this measuring instrument is that it can essentially measure the entire phase space of some portion of a system, and for it to do this, the system must be non-separable. Computers are separable, however. We can explain everything a computer does by examining the function of each transistor and circuit. The experience for a computer therefore is merely functing. Experience cannot be proven to reliably correlate inside a computer, and in fact, experience is never needed to explain anything a computer does. 


2009-04-29
The 'Explanatory Gap'
Reply to Stevan Harnad

SH: “For this reason, brain "analogs" too are just systematic correlates insofar as felt quality is concerned. I may have (1) a neuron in my brain whose intensity (or frequency) of firing is in direct proportion to (2) the intensity of an external stimulus (say, the amplitude of a sinusoid at 440 hz). In addition, there is the usual log-linear psychophysical relationship between the stimulus intensity (2)  and (3) my intensity ratings. The stimulus intensity (2)  and the neuronal intensity (1)  are clearly in an analog relationship. So are the stimulus intensity (2) and my intensity ratings (3) (as rated on a 1-10 scale, say). And so are the neuronal intensity (1) and my intensity ratings (3). But you could get all three of those measurements, hence all three of those correlations, out of an unfeeling robot. (I could build one already today.) How does (4) the actual feeling of the intensity figure in all this?”

 

I’ll start with your statement above to suggest that there is a crucial factor that is almost always overlooked in arguments of this kind. The critical point is that while you could build a robot that has all three of those correlations, you are NOT able to build a robot that has another capacity --- one that I claim is essential for the existence of your feeling the intensity of a 440 hz sound. This missing capacity is the internal representation of the sound and its location within a space-time analog of the volumetric world from a privileged egocentric perspective. My central claim is that you cannot be conscious without a brain representation of something somewhere in your egocentric space. In my theoretical model of the cognitive brain, this analog representation is the minimal condition for consciousness to exist.  Why this should be the case, I do not know. But while I can’t explain the sheer existence of consciousness, I believe that I can explain many important aspects of the content of consciousness (feelings) by the neuronal structure and dynamics of putative brain mechanisms that I call the retinoid system.

 

See (http://eprints.assc.caltech.edu/355/)

 

As for Shepard’s rotation experiments, I would say that the rotation and matching “functings” are unconscious processes. It isn’t until the result of the matching test is signaled as a neuronal event that is projected into egocentric space that it is consciously sensed. Then it is linked to an appropriate lexical pattern (inner speech), and reported verbally or by key press. Not all analog representations are felt, but all felt representations are analogs of something somewhere in our egocentric space.

 

SH: (3') “There's a permanent explanatory gap (because feelings are noncausal), but not an ontological gap (because telekinetic dualism is false)”

 

I think the distinction between the sheer existence of feelings and the particular contents of feelings must be clarified before we address the question of an explanatory gap. I argue that while the existence of consciousness (feelings) may be beyond our ability to explain, the contents of consciousness can be explained. Moreover, since particular feelings are patterns of neuronal excitation in our retinoid representation of egocentric space that are projected upstream for higher cognitive processing, I claim that feelings must be causal. These matters deserve some extended discussion.

 

Stevan, earlier in this thread I asked you this question: “Suppose the functing of a particular kind of brain mechanism was theoretically specified, and on the basis of its putative operating principles, one predicted the occurrence of a particular kind of feeling never experienced before. Suppose the prediction was successful and repeatable. Would you then be inclined to accept the idea that the functing of the specified brain mechanism was the biophysical aspect of the predicted feeling?”

What is your answer?

2009-04-29
The 'Explanatory Gap'
Reply to Arnold Trehub
ON PREDICTING WHAT IT FEELS LIKE TO BE A BAT...
AT: "Not all analog representations are felt, but all felt representations are analogs of something somewhere in our egocentric space"
Arnold, I am afraid you have given up the game here! The M/B problem and the explanatory gap are about explaining how/why functions are felt, rather than just functed. You work on analog functions, which is fine -- valuable, informative. But it is how/why (some) analog functions are felt that is at issue here, not how/why they are analog, or functional.
AT: "while the existence of consciousness (feelings) may be beyond our ability to explain, the contents of consciousness can be explained"
What can be explained is the functionality of analog functions; and what we have (as a gift) is their correlation with feelings. How and why feelings are there and correlated with functions is completely untouched. That is the explanatory gap.
AT: “Suppose the functing of a particular kind of brain mechanism was theoretically specified, and on the basis of its putative operating principles, one predicted the occurrence of a particular kind of feeling never experienced before. Suppose the prediction was successful and repeatable. Would you then be inclined to accept the idea that the functing of the specified brain mechanism was the biophysical aspect of the predicted feeling?”
Not inclined in the least! 


You are simply re-affirming the feeling/functing correlation, not explaining it. Sonar perception (of a bat) feels like something. Humans don't feel sonar. If someone genetically engineered a sonar perception mechanism that could be added to the human brain and it produced not only bat-like functional capacities, but felt perception, this would of course not prove anything at all (insofar as the feeling/function problem is concerned), even if all went exactly as "predicted." No one but a bat knows what it feels like to be a bat today (although we do have a very rough idea from our other sense-modalities, as all the senses resemble one another in a very general sense: guessing or describing what it feels like to be a bat, for us, is rather like a congenitally blind person guessing what it feels like to see.)


The very same is true of a brand-new, artificially engineered sensory modality: Even if it works, and produces both functioning and feeling, correlated, as predicted, it still does not explain in the slightest how/why it is felt. It simply migrates the mystery to a brand-new sensory modality.


And the fact that it uses analog function does not illuminate the f/f problem by even a single candela (or jnd), alas!


-- SH







2009-05-01
The 'Explanatory Gap'
Reply to Jamie Wallace
JS: "what are feelings?"
          SH:  "Everyone who feels knows that, even if they effect not to."

I do not deny that the words "consciousness" and "feelings" have robust and important places in our common language.  I asked that question because I wonder whether your usage of the term reflects the common usage.  I think that, when you talk about feelings, you must be talking about something else, but I do not know what. (And replacing the word "feelings" with more philosophically sophisticated terms, like "qualia" or "raw feels" or even, "contents of consciousness," will not help here, because I still want to know how you understand these terms.)

When I asked you how you know feelings existed, you said you pinch yourself occasionally. Yet, if you did ever have occasion to doubt that feelings existed, I do not see how a pinch could have changed your mind. To put it another way: I do not see how this answer is supposed to help me understand what feelings are or how it is that we know they exist.

I asked how you know that feelings don't cause anything.  You answered:  "I know they feel as if they cause things (e.g., when I move my finger because I feel like it). But I notice that there are 4 fundamental forces in the universe, and that they cover my brain's every move, with no remaining degrees of freedom."

You may be right about the four fundamental forces accounting for all brain activity, but I do not see why we should think feelings can't be manifestations of these forces.  Thus, to rephrase my question, how do you know that feelings are not as causally efficacious as anything else in nature?

I asked how you know there is a correlation between brain states and feelings.  You said, "as my anxiety level goes up, my GSR goes up, and vice versa."

That is an example of a correlation, and as such does not answer my question.  How do you know that your anxiety level goes up when your GSR goes up?

I'm looking for some explanation of your use of the term "know" here:  how it is that the rise in anxiety can be known, and how the rise in GSR can be known, and how the correlation between the two can be known.  The issue here is why you think that you know about your feelings in an indubitable and inexplicable way.

I have a slightly different interpretation of Wittgenstein on that point about private languages you mentioned earlier, and it is relevant here.  It is not only that a wholly private language lacks the possibility of error correction; it is that the very notion of error makes no sense here.  There is no criterion for correctness, and so there is no sense in which a word can be used wrong (and thus correctly) to refer to private sensations.  The implication is that you can (in theory) use the word "feeling" to refer to something which is absolutely private, but you cannot claim that this usage is correct, and so it cannot indicate knowledge of such sensations.  So, when you say, "I know with absolute certainty what red is, because it is my feeling alone and I experience it directly" (or something similar), we should conclude that you aren't saying anything.  As W. says, "a nothing would serve just as well as a something about which nothing could be said" (Philosophical Investigations, section 304).

I stated that I cannot know I have a toothache without observing my bodily states, and that I could not know I had a toothache without having a tooth.  You replied:  "No? What about referred pain, or phantom limb pain, or hysterical pain, or hallucinated pain?"

It is a necessary condition of knowing that X is the case that X is true.  If one said, "my foot hurts," but did not have a foot, we would not say they knew that their foot hurt.  Rather, we would say they thought or believed their foot hurt, or perhaps that they felt as though they had a sore foot.  To know that you have a toothache, you must have a tooth; as I said, unless you wish to define "toothache" to mean something other than an aching tooth.

Perhaps you only mean to say that you can know you feel like you have a toothache without observing your body in any way.  More generally, you want to say that no bodily observations could produce knowledge of feelings, and no knowledge of feelings could produce knowledge of bodily states.  In your view, feelings do not inform us about our bodies at all--for, if they so informed us, then they would play a causal role in our ability to learn about and function within the world.  And if observations of our bodies could inform us of our feelings, then there would be no "hard problem." 

This is a form of dualism.  Whatever feelings are and whatever functions are, information about one cannot be gained from the other.  You prefer to call your position "epiphenomenalism," because you wish to maintain some notion of causal dependence between bodily states and feelings, even if that dependence is only one-way.  But such a causal dependence is unknowable--a something about which nothing could be said. 

To see the dualism more clearly, consider this:  When you ask "why are some functions felt?," what is it that you suppose is feeling the functions?  What sort of entity can feel?  I do not see how you can answer this question without explicitly embracing dualism; and if you do not answer it, then your usage of the term "feel" becomes highly suspect.  There is no practical difference between epiphenomenalism and dualism that I can see.  Epiphenomenalism is dualism without an explicit ontological commitment, and that is an empty distinction when we consider that any explicit ontological commitment would be to specify another something about which nothing could be said. 

Your position cannot be established a posteriori.  Appeals to common knowledge and ostensive definitions can only beg the question.  You do indicate something like Chalmers' conceivability argument when you talk about robots, and that is an a priori argument; however, I am not convinced by it (see this discussion for details.)

2009-05-01
The 'Explanatory Gap'
Reply to Stevan Harnad
The Existence of Consciousness and the Content of Consciousness...

I think we now may be getting to the distinction that needs to be hammered out.

AT: "Not all analog representations are felt, but all felt representations are analogs of something somewhere in egocentric space."

SH: "Arnold, I am afraid you have given up the game here! The M/B problem and the explanatory gap are about explaining how/why functions are felt, rather than just functed. You work on analog functions, which is fine -- valuable, informative. But it is how/why (some) functions are felt that is at issue here, not how/why they are analog, or functional."

Let's take this a step at a time:

1. If we ask how/why some functions are felt, we seem to grant that some functions are not felt, and we can ask if there is a systematic biophysical  difference between felt functions and unfelt functions. 

2. We can also ask why any felt function is felt.

It seems to me that question 2 is equivalent to asking why anything like feeling (consciousness) exists at all. Would you agree, Stevan?
If you don't agree, it would be very important to know why you disagree.

AT 






2009-05-01
The 'Explanatory Gap'
Reply to Jamie Wallace
I am equally curious as to how professionals variously constitute "explanation" re this question. Materialist/determinist folks appear to require explanations to express absolute conformity to strict logical templates of cause-effect, etc. Others, myself included, consider explanation to be the most fruitful, if not normative, when it partially and/or putatively explains so as to assist further avenues of pursuit. While such explanations are hardly formal, often imprecise and rarely rigorous, nonetheless they are "partial" explanations whose veracity only awaits other kinds of validation. I might add that this distinction is a profound source of miscomprehension (animosity?) as between theorists and those adhering to a very strict "experimental" approach to the "scientific method". I also note as an aside that it does not seem to me that philosophy has adequately distanced itself from the science purists (as I term them).

Now if we restrict attention solely to the "consciousness" aspect of this question, I am of the school holding that sleep, dreams and consciousness are so essentially related as to require answers to any two to obtain any third. Ergo, a putative explanation may hold that to the extent the common modalities of identificative and projective mechanisms run in common as between REM and consciousness of the awake state, we would look to hypnogogic mechanisms of explanation. While there aren't necessarily a lot of those to go around, the theoretical material is astounding. If, therefore, we can better grasp the role of hypnosis in positive and negative hallucinations, most all of which entail awake consciousness, we will need only verify through more physical/lab approaches--some of which are already available.

I truly believe that sleep and dream research holds the answers--the 'explanation'--for the origin and nature, as also much of the functionality, of consciousness. And once we have a better handle on that, well, the explanatory elements enable generalization to qualia of varying sorts, and so forth. So I guess I allow the presence of a "gap" but presume that substantive progress awaits, though I also am fully open to the view that some 'gap' will always remain, to the extent that "emergent" qualities are likely to remain without satisfactory explanations long into the future, though Batterman (Devil in the Details) has an approach that seems to me to offer promise even here.

2009-05-01
The 'Explanatory Gap'
Reply to Jamie Wallace

Arnold,

In reading your paper on the retinoid system, http://eprints.assc.caltech.edu/355/:

SPACE, SELF, AND THE THEATER OF CONSCIOUSNESS

"A particularly striking finding of hemispatial neglect in patients with brain lesions involving the right temporal-parietal-occipital junction (Bisiach and Luzzati, 1978) also lends support to the retinoid model of spatial representation. In this study, patients were asked to imagine that they were standing in the main square in Milan which was a very familiar setting for them. They were first instructed to imagine themselves facing the cathedral and to describe what they could see in their “mind’s eye.” They reported a greater number of details to the right than to the left of their imaginary line of sight, often neglecting prominent features on the left side. When they were asked to perform the same imagery task, facing away from the cathedral, they were able to report previously neglected details within the right half of the imaginal perspective but ignored items in the left half that they had reported just a few moments before."

What's interesting is that recognition of details to the left affirms a principle of retail marketing, namely that when customers enter a store, they tend to walk to the left, and retailers design their stores accordingly. Case in point is my local pharmacy/retail store. When you enter the store, the high-profit cosmetic items are immediately to the left, with the pharmacy counter in the extreme rear of the store and the necessity food items like milk, eggs, bread etc. on the extreme right side of the store. When my teenage daughter and friends enter the store they immediately go to the cosmetics area where they can easily spend $20 on lip gloss and eyeliner.

The left side of our visual field and of our body is wired to the right hemisphere, and the right side to the left hemisphere, of course. The left hemisphere, which is wired for logic and speech, is our dominant hemisphere for normal experience. When a person (especially a young person) enters a store (or Disneyland), the right hemisphere, which seeks experience, inclines them to go left, while the parent may go right to buy milk and bread.



2009-05-02
The 'Explanatory Gap'
Reply to David Chalk

ON MEASURING, FEELING, AND COMMENSURABILITY: (AND MIND THE ONTIC/EPISTEMIC GAP!)

David, I think you have misunderstood a number of things:

(1) The most important is the ontic/epistemic distinction: Distinguish between what there really is (ontic) and what we can know about what there really is (epistemic), e.g., what we can observe or measure. Although it was fashionable for a while (though one wonders how and why!), it will not do to say "I shall assume that what I can observe and measure is all there is and can be." Not if you want to address the question of the explanatory gap, rather than simply beg it! 

(2) Observation and measurement also have to be looked at much more rigorously. In the most natural sense of "observe," only seeing creatures observe. A camera does not "observe," it simply does physical transduction, producing a physical "image" (on the film) which, again, is simply another object that has some properties (which in turn are analogs of some of the properties of the object from which the light entering the camera originated). The seeing person who looks at the image on the film is the one who observes, not the camera. 

The same is true of measurement: A thermometer does not "measure" temperature; people measure temperature. The thermometer itself simply implements a physical interaction, in which its mercury rises to a certain point on the (man-made) scale, which can then be read off by a seeing, observing, measuring human. The user is the one doing the measuring, not the thermometer.

But there is no reason to be quite this rigid: There is not much risk in talking about instruments doing the measurements, rather than the users of the instruments, just as long as we do not read too much into "measuring." Ditto for "observing." In particular, we must on no account make the mistake of treating this instrumental sense of measuring and observing as if it were felt measuring and observing, because then, again, we are simply begging the question of the explanatory gap and the feeling/functing problem.

In the instrumental sense of "measurement," we can say, for example, that unattended temperature sensors in the arctic transmitted their "observations" to computers, which analyzed them and produced a result, which (correctly) predicted global warming and the destruction of the biosphere in N years. And that event would be the same event if humans were already extinct and the arctic sensors and computers were running on auto-pilot. But what would it mean?

(Remember that I have a radically deviant view, not the standard one, on the subject of the relation between feeling and meaning: I think only felt meaning is meaning; without feeling all one has is grounded robotic functing (and semantic interpretability). So even if, after the extinction of humans, the arctic sensors and the computers transmitted their data to robots that then took the requisite steps to avert the global warming and save the biosphere, that would all still just be physical transduction and nothing else -- except, of course, if the robots actually did feel -- but in that case it would be irrelevant that they were robots! They might as well be us; and all the observing and measuring is again being done by feeling creatures, and the feeling/function gap is as unbridged as ever!)

(3) Your third equivocation, in what follows below, is in the weasel-word "experience" -- which can mean felt experience, as in our case, or, used much more loosely and instrumentally (as with "observing" and "measuring"), it can merely mean an event in which there was again some sort of physical interaction. Whether the event was one billiard ball hitting another, or a camera snapping a photo after all life is gone, or a computer receiving the bits and applying an algorithm to them -- these are all pretty much of a muchness. There's no "experience" going on there, because of course it's only really an "experience" -- rather than just an event or state with certain functional properties -- if it is felt (by someone/something).

And that (and only that) is what this discussion is all about, and has been, unswervingly, all along (for those who grasp what the explanatory problem at issue is).

DC: "'telekinesis' is abhorrent because it suggests there are nonphysical phenomena which influence the comings and goings of material things.  

Ordinary ("paranormal/psychic") telekinesis is not "abhorrent," it is simply false, in that all evidence contradicts it. All seemingly telekinetic effects keep turning out to be either due to chance or to cheating.

And as for (what I've called) "telekinetic dualism" -- that too is not abhorrent. It is perfectly natural, indeed universal, to believe and feel that our feelings matter, and that most of what we do, we do because we feel like doing it, and not just because functing is going on, of which our feelings are merely correlates -- correlates of which we do not know the causes, and, even more important, correlates which themselves have no effects of their own, and we cannot explain how and why they are there at all. (That, yet again, is the f/f problem and the explanatory gap.)

DC: "To suggest...momentum, position and fields... might be influenced by 'feeling' seems ludicrous." 

It is not ludicrous; it is simply false.

DC: "However, suggesting that momentum, position or fields can create phenomena that are not measurable by measuring the momentum, position and field is just as serious a problem as suggesting said phenomena influences those measurements"

How did we get into "measurability"? We can measure momentum today that was too minute to measure yesterday. Maybe there's still momentum we can't measure, or don't even know about. This is the ontic/epistemic error: What there is (and isn't) in the world owes nothing, absolutely nothing, to what human senses and instruments can or cannot "measure."

Moreover, the f/f problem and the explanatory gap have nothing to do with the limits of human senses or measuring instruments. They have to do with the fact that we feel, yet we cannot explain how or why, because all evidence is that feelings, though they are there alright, have no independent causal power. They are just inexplicable correlates of the things that really do have causal power (functing). Hence the mystery about why everything is not all just unfelt functing: Why are some functions felt?

DC: "If you don't want to accept telekinesis, then why accept the corollary which is that objectively measureable properties produce phenomena that are not objectively measurable?"  

I have no problems whatsoever with the very real possibility that measurable properties may also have unmeasurable effects. The problem is that that has absolutely nothing to do with the problem of explaining how and why some functions are felt. It is not immeasurable effects of functing that are the problem; it is the fact that some functing is felt. (And although feeling is not, strictly speaking, "measurable," it is certainly observable -- indeed, it is the only thing that is unproblematically observable!) (It is no wonder that -- in struggling with their own "explanatory gap" -- philosophers of quantum mechanics have made something of a cult out of human observation, as being the mysterious cause of the "collapse of the wave packet" that separates our punctate world from the continuously superimposed smear it would be if there were no people to read off the outcome of a geiger-counter experiment! But, alas, this is just piling mystery atop mystery...)

DC: "If you can't measure it, don't accept it."

There's the barefoot operationalism, again. This may be useful advice to an experimental physicist -- if not to a superstring theorist -- because all they deal with is functing anyway, whether measurable or unmeasurable. But it is just question-begging if you are trying to explain how/why organisms feel rather than just funct.

DC: "Earlier you suggested that experience/qualia/feeling are measurable by the subject and reportable, but are not causal or perhaps are epiphenomenal.  Could you...clarify this?"

(First, why the needless synonyms "experience/qualia/feeling" when feeling covers them all and is problem enough?)

Second, I did not say feelings are measurable. (I think physical properties and feelings are incommensurable, and that measurement itself is physical, functional.) I said our feelings correlate with functing. We say (and feel) "ouch" when our skin is injured, not when it is stroked, or randomly; we say (and feel) a sound is louder when an acoustic amplitude increases, not when it decreases (or randomly). So the correlation is definitely there.

But this does not help explain why (or how) tissue damage and acoustic amplitude change is felt, rather than functed. If our neurons simply fired faster when we were hurt, or when a sound got louder, and caused our muscles to act accordingly, but we did not feel, then we'd still have the psychophysical correlation (stimulus/response) -- including, if you like, JND by JND psychophysical scaling -- but no correlated feeling. So the question naturally arises: what's the point of the feeling?

I also don't think I am measuring anything when I feel, or report my feeling. I am simply feeling. When I say "more" or "less," I am saying this feels like more and that feels like less. The psychophysicist is doing the measuring (not I): He is measuring what I do (R) and comparing it to the stimulus (S) and noting that they are tightly correlated. I am just saying how it feels. As I said in my reply to Arnold Trehub: apart from the S/R correlation, there is not a separate "sentometer" to measure the feeling itself; it's not even clear what "measuring a feeling" would mean. Nor, as I said, am *I* "measuring" what I'm feeling, in feeling it, and acting upon it. I'm just feeling it, and acting on it. And there is a tight correlation between what happens outside me (S), what I feel, and what I do (R). There better be, otherwise I would come from a long line of extinct ancestors. But the co-measurement is only between S and R, which are both functing and unproblematic. It feels as if I am drawing on feelings in order to generate my R, but how I do that is rather too problematic to be called "co-measurement" in any non-question-begging sense of measurement. So although the feeling is correlated with S and R, they are not commensurable, because the feeling is neither being measured, nor is it itself a measure, or measurement.

You also seem to be misunderstanding "epiphenomenal": Epiphenomenal does not just mean "unimportant or unmeasurable side-effects." It means (1) an effect that is uncaused, or (2) an effect that has no effects. I am a "materialist" in that I am sure enough that feelings are caused by the brain, somehow (i.e., they are not uncaused effects (1)); I simply point out that we have no idea how feelings are caused by the brain (and we never will). But the real puzzle is not that: the real puzzle is why feelings are caused by the brain, since feelings themselves have no effects (2). They are functional danglers, which means that they are gaps in any causal explanation.

There is one and only one epiphenomenon (unless QM has a few more of its own), and that is feeling: Caused (inexplicably) by the brain, feelings themselves (even more inexplicably) cause nothing -- even though it feels as if they do.

DC: "You don't want experience to influence anything physical.  You don't want there to be an unmeasurable influence on any material comings and goings."  

First, this has nothing to do with what I do or don't want!

Second, rather than equivocate on "experience," can we please stick to calling it feeling!

Feelings have no independent causal power, not because I don't want them to, but because telekinetic dualism is false: there is no evidence for feelings having any causal power, and endless evidence against it.

And whereas there can certainly be unmeasurable effects, one cannot invoke them by way of an explanation of something without evidence. Besides, the problem with feelings has nothing to do with measurability; it is their very existence that is the problem. And even if they were completely uncorrelated with anything else (the way our moods sometimes are), they would still defy causal explanation.

DC: "As an example, we might consider a computer being used to control some process such as the launching of a rocket.  One might say the computer has a causal influence over this process, albeit an epiphenomenal one."  

Why on earth would you want to say the influence was epiphenomenal? This is a perfectly garden-variety example of causal influence!

DC: "One might take the position that everything above the molecular level is epiphenomenal, and certainly philosophers have suggested exactly this."  

Philosophers say the strangest things. If everything above the molecular level is "epiphenomenal," we have lost the meaning of "epiphenomenon" altogether.

And that's just fine. I get not an epsilon more leverage on the inexplicability of how and why some functions are felt if I add that they are "epiphenomenal"!

DC: "computers, circuits or transistors are... all part of a causal chain from atomic and molecular interactions to rocket launch."  

Indeed they are. No causal gaps there. It's with feelings that you get the causal gap that lies at the heart of the explanatory gap.

DC: "you're suggesting that experience is not part of that causal chain.  Experience/qualia/feeling can not play a part in any way in this causal chain."  

First, can we just stick with the one term "feeling"? The proliferation of synonyms just creates a distraction, and what we need is focus, and to eliminate everything that is irrelevant.

The evidence (not I) says that feelings have no independent power to cause anything. All the causal chains on which they piggy-back mysteriously are carried entirely by (unproblematic) functing.

DC: "What I don't think you're suggesting is that feelings are epiphenomenal in the same sense as the computer's causal influence is epiphenomenal" 

(1) I don't for a minute think a computer's causal influence is epiphenomenal. Its causal influence is causal!

(2) I would suggest forgetting about "epiphenomena" and just sticking with doing, causing and feeling.

(3) All evidence is that feelings do not cause anything, even though they feel as if they do. All the causation is being done by the functing, on which the correlated feeling piggy-backs inexplicably.

(4) The inability to explain feeling causally is the explanatory gap. 

DC: "let's suggest that the experience of the color red can be reliably measured by a person."  

Alas we are back into ambiguity and equivocation.

It feels like something to see red.

The feeling is correlated with wave length (and brightness and luminosity), as psychophysics has confirmed.

Persons don't measure. They feel, and respond (R). Psychophysicists measure (S and R).

S and R are reliably correlated, and since R is based on feelings, we can say feelings are reliably correlated with S too (even though, strictly speaking, S and R are commensurable, but neither is commensurable with feelings).

The human subject, however, is not measuring, but feeling, and doing.

DC: "a digital camera can take light and convert it to a digital pattern which can be reconverted to wavelength using just three pixels on a computer screen.  The intensity we observe from each pixel is interpreted and converted to color inside the brain.  I doubt anyone would say that the experience of color exists at any step of the process between recording the color red using the camera and the reproducing of the color at a computer screen."  

No, the feeling (sic) of seeing color occurs in the brain of the feeling subject. Not before or after in the causal (or temporal) chain. 

(And why the computer? Let the stimulus be color. No need for it to be computer-generated color. If the digital-camera/computer is used instead as an analogy for the seeing subject, rather than the stimulus, the answer is that there is no feeling in the camera or the computer.)

DC: "let's say we had a device which could reliably measure the experience of red.  A human is just such a device if experience reliably correlates to function/behavior."

David, with this "assumption" you have effectively begged the question and given up (or rather smuggled in) the ghost (in the machine): Until further notice, the only devices that have experiences (feeling) to "measure" are biological organisms. If you declare some other device to feel by fiat, you're headed toward panpsychism (everything and every part and combination of everything feels) which is not only arbitrary and as improbable as telekinesis, but is probably incoherent too.

No device can measure a feeling (sic); it can only measure a functional correlate of a feeling. And a human subject feels the feeling; he does not measure it.

DC: "Now, if this internal measurement is reliable, then let's assume we can similarly produce this experience computationally."  

You've lost me. There is no internal measurement going on, just feeling. And it is "reliable" inasmuch as it correlates with S and R. 

It is of course the easiest thing in the world to replace a human -- feeling, say, sound intensity -- by a computer, transducing sound intensity, in such a way as to reproduce the human S/R function.
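
(Just to make that concrete, here is a minimal, purely illustrative sketch in Python of such an unfeeling transducer. The power-law form and the constants are placeholders -- stand-ins for whatever the real fitted psychophysical function would be -- not actual measured parameters:)

    def loudness_response(sound_pressure, reference=2e-5, exponent=0.6, gain=1.0):
        # Map a physical stimulus magnitude (S) onto a response magnitude (R)
        # with a Stevens-style power law: R = gain * (S / reference) ** exponent.
        # Every step here is functing: transduction and arithmetic, nothing felt.
        return gain * (sound_pressure / reference) ** exponent

    # The device reports "louder" or "softer" exactly as a human subject would,
    # reproducing the S/R correlation without feeling anything whatsoever.
    previous = loudness_response(0.02)
    current = loudness_response(0.04)
    print("louder" if current > previous else "softer")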

Trouble is that in so doing you have not solved the f/f problem but simply begged the question -- which is, let me remind you: How and why are we not also like that unfeeling device, transducing the input, producing a perfect S/R function, but feeling nothing whatsoever in the process?

DC: "Let's assume our computer's transistors can produce this reliable correlation and report dutifully the experience has been accomplished. If this is possible, then that computer... has physically measured the phenomenon in question and produced a physical report."  

You seem to think that the f/f problem is getting a device to produce a reliable psychophysical detection (S/R) function: It's not. The problem is to explain how and why we are not just devices that produce a psychophysical detection (S/R) function: how and why we feel whilst we funct.

(And this is not about measurement, but about explaining the causal role of feeling in human functing.)

DC: "If the measurement of the experience is reliable, then that measurement can be (must be) converted to a physical signal so that it is reportable, else it is not reliable.  So if the measurement of experience is reliably reported, then something can be done with that signal.  The signal can be interjected into a causal chain..."

I'm afraid you have left the real problem long behind as you head off into this measurement operationalism that begs the question at issue, which is not about reliable "measurement" but about felt functing.

DC: "We can have an if/then statement in our computer which says, If Xperience = RED then "SCRUB LAUNCH".  In this way, qualia/experience/feeling is interjected into the causal chain."  

You really think feeling is just a matter of an if/then statement in a computer program? Would a problem with a solution as trivial as that really have survived this long? If the physical substrate of feeling were (mirabile dictu) if/then statements in a computation, there would still be (as with the perpetuum mobile) that niggling little problem about why the if/then statements were felt rather than just functed...
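
(For the record, here, as a hypothetical Python fragment, is roughly all that DC's proposal amounts to; every line of it is garden-variety functing, and nothing in it is felt -- which is precisely the point:)

    # Hypothetical launch-control check, following DC's example.
    # Calling the sensor output "Xperience" would not make it felt rather than functed.
    detected_color = "RED"  # value produced by an unfeeling color sensor

    if detected_color == "RED":
        print("SCRUB LAUNCH")
    else:
        print("PROCEED WITH LAUNCH")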

DC: "Unless I've screwed up somewhere, which is entirely possible, the bottom line is that experience/feeling can be a part of the causal chain if it is internally measurable (subjectively measurable) and as long as that measurement is reliable."  

I regret to say that you have indeed screwed up at a number of points, big time! I've tried to point them out. They begin with your operationalism about "measurability"; they continue with the equivocation on "experience" (felt experience? how/why felt, then, rather than just functed?); and they end with your (arbitrary) equation of feeling with "measuring."

DC: "One might still claim this influence is epiphenomenal as I've defined epiphenomenal above using the rocket launch example."  

As you've defined epiphenomenal, epiphenomenality is so common that it casts no light at all on the special case of the causal status of feeling.

DC: "We can explain everything a computer does by examining the function of each transistor and circuit.  The experience for a computer  therefore is merely functing.

Here the equivocal word "experience" has even led you to saying something that is transparently false or absurd if stated in unequivocal language: "The feeling for a computer is merely functing," i.e., the computer does not feel, it merely functs. (And our problem -- remember? -- was not computers, but *us*, 'cause we really do feel, rather than just funct, like the computer...)

DC:  "Experience can not be proven to reliably correlate inside a computer, and in fact, experience is never needed to explain anything a computer does."  

For the simple reason that (replacing the weasel-word "experience") the computer does not feel. (Hence we are not just computers, or like computers in that crucial respect.)


2009-05-02
The 'Explanatory Gap'
Reply to Arnold Trehub

AT: 1. If we ask how/why some functions are felt, we seem to grant that some functions are not felt, and we can ask if there is a systematic biophysical  difference between felt functions and unfelt functions. 
We had better grant that some functions are felt and some are not felt (since it's true!): My toothache is felt; my thermoregulation is not (although I can feel hot); a furnace's thermoregulation is unfelt, and the furnace does not feel hot (or anything).


We can certainly look for biophysical differences between my felt and unfelt functions; but just as the functional correlates of my feelings will not tell you how or why I feel, the functional correlates of felt and unfelt functions won't tell you either. (And the reason is that there simply isn't the causal room for feelings to have any effects at all (independent of their correlated functions), hence there isn't any room for a causal explanation of how and why we feel: the correlated functions tell all there is to tell.)
AT: 2. We can also ask why any felt function is felt. -- It seems to me that question 2 is equivalent to asking why anything like feeling (consciousness) exists at all. Would you agree, Stevan?
Yes, which is why I've reformulated the mind/body problem as the feeling/function problem: How and why are some functions felt?

About the "how" -- i.e., how are feelings generated? -- I don't doubt for a minute that the cause is the brain. What I doubt is that we can explain how the brain generates the feelings, rather than just the correlated functions. So this is not about whether materialism is true. (Of course it is.) It is about whether material (functional) explanation is complete: No it isn't. There's an explanatory gap, insofar as the (fact of) feeling is concerned.

But the harder question is the "why." The "why" is not teleological, it is functional, and causal: In a sense, the only satisfactory answer to a functional question -- why does this device work this way? what functional role does property X play? -- is a functional answer. But if we ask a functional question about feeling -- why does this device feel? what functional role does the fact that it feels play? -- we draw a blank, because feelings have no independent functional role. All the functionality is accounted for by the functional correlates of feelings! That's why "Why are some functions felt rather than just functed?" is the core question. And since a satisfying answer could only be a causal/functional one -- and there is simply no causal room for such an answer (given that telekinetic dualism is false), we are stuck with an explanatory gap.
 
(I should have added in my earlier reply, Arnold, that the object is not to predict what we feel, but to explain that we feel (how, why). And that will not be accomplished by analogs, representations, etc.)

-- SH


2009-05-02
The 'Explanatory Gap'

UNTOWARD CONSEQUENCES OF UNCOMPLEMENTED CATEGORIES

JS: "You may be right about the four fundamental forces accounting for all brain activity, but I do not see why we should think feelings can't be manifestations of these forces.  Thus, to rephrase my question, how do you know that feelings are not as causally efficacious as anything else in nature?"

"Manifestations" is a weasel-word!

I'm pretty sure feelings are caused by the usual four FFs (i.e., I'm not a "dualist," for what my beliefs are worth!). 

But I am pretty sure no one has explained how feelings are caused by the usual four FFs. And I'm pretty sure it's impossible to explain how they are caused. As usual, the attempted explanations will turn out to be explanations of doings, and doing capacity (i.e., functing), not feeling.

As for the fact that feelings have no (independent) effects (i.e., apart from the unproblematic direct effects of the same four FFs on which the feelings are piggy-backing causally): I'm as sure of that as I am that telekinetic dualism is false. (For that is what it would take for feelings to have effects.)

 JS: "a correlation... does not answer my question.  How do you know that your anxiety level goes up when your GSR goes up?"

I think I made it clear I was not invoking a cartesian "know" (i.e., certainty) for the correlations between feeling and functing, just for the fact that I feel. For the correlations I am no surer than I am that, say, night follows day, or that there's an external world...

JS: "why you think that you know about your feelings in an indubitable and inexplicable way."

I am as certain I feel (when I feel) as Descartes was of his cogito -- indeed, it is the cogito, which should have been "sentio ergo sentitur".

And I'm as sure that it's inexplicable as I am that the 4 FFs are all there are, and all that's needed to cause all that's caused. Thus, whereas there's room for feelings as effects, there's no room for them as causes.

And explanation (here) means causal explanation (of how and why we feel rather than just funct).

JS: "a slightly different interpretation of Wittgenstein... It is not only that a wholly private language lacks the possibility of error correction; it is that the very notion of error makes no sense here.  [so] you can... use the word "feeling" to refer to something... private, but you cannot claim that this usage is correct, and so it cannot indicate knowledge"

I do interpret Wittgenstein on private language much the same way you do, and that is the problem of error: 

I can't nonarbitrarily name what I'm feeling, even with public correction: I could be calling what it feels like to feel sad "sad" one day and "happy" another day, without the possibility of anyone -- including me -- being any the wiser, as long as my public sayings about feelings were reliably correlated with my public doings and sayings, and it all kept feeling fine to me. 

(I could of course do the same thing if Zombies were possible and "I" were a Zombie: "My" sayings [including my sayings about feelings] and my doings [of which my sayings are of course just a particular case] would be reliably correlated in that case (i.e., if "I" were a Zombie) too, again with the help of public corrective feedback on my doings and sayings -- except that instead of random feelings that just fooled me each time into feeling as if they were familiar recurrent feelings, there would simply be no feelings at all: just the functings that subserve the doing and the saying, which are of course likewise functings.)

In a fundamental sense, all of this is true about every feeling: even with public corrective feedback, there could be a reliable correlation between whenever I'm feeling F and what I refer to publicly as "F", but that correlation could be just as reliable if it were just a correlation with the inclination to call F "F" publicly, plus the feeling that I'm feeling that old familiar F at the time, when in reality I am feeling something randomly different every time. But that's really just about the reliability of public naming (and the correlation plus external feedback takes care of that); it's not about the reliability of the recurrence and identification of the self-same feeling every time it feels as if it's recurring. (It's not for nothing that "feeling" and "seeming" are fully interchangeable in all of this!)

But none of that touches on the fact of (ongoing) feeling itself, about which I have cartesian certainty every time it happens. Not only do I know that I'm feeling, whenever I'm feeling, but even if I'm not feeling what I called F the last time, and instead only feeling-as-if-I'm-feeling what I called F the last time, the fact that I am nevertheless feeling something remains a cartesian certainty there too. 

The best way to see this is to forget about the naming of the feeling; in fact, assume we are talking about a species that has no language. An alligator can have a headache (that feels much like our headache feels) without knowing he has a head, and without calling the feeling anything, nor even remembering ever having felt that feeling before. Whatever the alligator is feeling at the time, it is a certainty that it is feeling, and that it is feeling that (though that poor precartesian alligator may not be feeling that certainty!). And if an alligator were capable of cartesian doubt, he would be incapable of doubting he was feeling a headache (when he was indeed feeling a headache), exactly as I would be incapable of doubting I was feeling a headache -- i.e., doubting that I was feeling whatever I was feeling -- when I was feeling a headache (though I would be perfectly capable of doubting I had a head). (I repeat, the current feeling need not be the same feeling as the feeling I had the last time I felt I had a headache; it could just be déjà vu. This one could feel hot and that one could have felt cold, and I could simply have forgotten that. It doesn't matter. What matters is that I can be sure I am feeling something (or other) now, and that whatever that something (or other) feels like now is what it feels like (and not something else).) (Again, the synonymy of "feeling" and "seeming".)

An important further point I made earlier in another posting: If I am to have a well-defined category, it must have both positive and negative instances (i.e., members and nonmembers), and I must have sampled enough of both to be able to pick out what distinguishes them, reliably. Only then can I really "know" (this is not the cartesian know, just a quotidian cognitive capacity to distinguish reliably) what's in the category and what's not in it. 

But the category "feeling" is one of a family of special cases (each of them causing conceptual and philosophical problems) because they are "uncomplemented categories" -- a kind of "poverty of the stimulus" problem arising from the fact that they are based (and can only be based) exclusively on positive instances: In contrast, the category "redness" is perfectly well-complemented: I can sample what it feels like to see red things and non-red things, no problem. But not so with the category "feeling": I can sample what it feels like to feel: I do that every time I feel anything. And I can sample what it feels like to feel X and to feel not-X. So through feeling X and feeling not-X (if there's no evil demon playing random scrambling tricks of the kind I mentioned above on the recurrence of my X and not-X feelings), "X" and "not-X" (or, if you prefer external negation, not-feeling X [when feeling Y instead]) are perfectly well instantiated  and complemented, hence reliably identifiable categories (insofar as ordinary, noncartesian cognition is concerned).

But feeling itself is not; for I can never feel what it feels like to not-feel (as opposed to merely not-feeling X, in virtue of feeling Y instead). All I have is positive evidence for what it feels like to feel.

But I do have evidence. So although the category "feeling" is uncomplemented, hence pathological in some ways, it is nevertheless a category. It leaves me with some indeterminacy about what to call what I'm actually feeling, and about whether or not I've actually felt it before (as it seems). It will also leave me with a lot of puzzles about what "feeling" is (including, notably, the mind/body problem!). But it will still leave no cartesian doubt as to the fact that feeling is indeed going on, when it is: sentitur. (Of course "sentio ergo sum" would be far too strong a conclusion to draw from such evidence: What is this "I" that I supposedly am? It's almost -- but just almost -- as uncertain as the existence of my head, when all I have to go on, by way of evidence, is my headache. The best we can say is that it feels as if there is an "I" -- but that's hardly more certain or cartesian than that it feels as if there's an outside world, or a "you".) (Life could have been just one isolated, amnesic "ouch" after another, with no "ego" -- yet that would already be enough to create the explanatory gap.)

So sentitur is all we can be certain about, regarding feeling; but that's quite enough to generate the full-blown mind/body (feeling/function) problem.

(All this is by way of my sketching my update on Wittgenstein's private-language argument and problem-of-error, plus a minor tweak of Descartes' cogito.)

JS: "so, when you say, "I know with absolute certainty what red is, because it is my feeling alone and I experience it directly"... we should conclude that you aren't saying anything."  

No, as I've just argued, I cannot have cartesian certainty about the coupling between my feeling and the world, nor about the recurrent identity of my feeling (what it's called, and whether it's the same thing I felt before under that name), but I can have cartesian certainty about the fact that I am feeling, when I'm feeling (and despite the fact that feeling is an uncomplemented category).

JS: "As W. says, 'a nothing would serve just as well as a something about which nothing could be said' " 

It's a subtle point, but I am not talking here about what can be said; I am talking about what can be known, with the same certainty as "if P then P" -- and even by an alligator, who cannot think "if P then P" but is just as bound by it...

JS: "Perhaps you only mean to say that you can know you feel like you have a toothache without observing your body in any way."  

Yes: I am talking exclusively about what and when one feels, not about any coupling between the feeling and the world (of bodies, etc.). That has exactly the same scope as the cogito -- indeed it is the cogito, properly put (sentitur).

JS: "In your view, feelings do not inform us about our bodies at all--for, if they so informed us, then they would play a causal role in our ability to learn about and function within the world.  And if observations of our bodies could inform us of our feelings, then there would be no ''hard problem'"

Correct. It is the functing (on which feelings piggy-back, inexplicably) that takes care of our doings and sayings about bodies, including, mysteriously, the correlation between bodily functings and feelings. And there is no cartesian certainty about functings (though of course they are largely reliable, adaptive and veridical); there is certainty only about the fact of ongoing feeling (and about "if P then P").

JS: "This is a form of dualism.  Whatever feelings are and whatever functions are, information about one cannot be gained from the other.  You prefer to call your position "epiphenomenalism," because you wish to maintain some notion of causal dependence between bodily states and feelings, even if that dependence is only one-way.  But such a causal dependence is unknowable--a something about which nothing could be said."  
(1) For what it's worth, I fully believe the brain causes feelings (about as fully as I believe that gravity causes apples to fall); hence I am not a "dualist."

(2) But gravity is one of the four fundamental forces (FFs), hence it calls for no further causal explanation. Feeling is not, hence it does.

(3) And hence I note that although the brain causes feelings, no one has explained how the brain causes feelings.

(4) Worse, no one has explained why the brain causes feelings, given that the four FFs unproblematically cause and constitute all causal function (functing).

(5) So feeling remains a causal/functional dangler: caused (somehow) by the brain, but not itself having any causal power of its own, over and above the functing that it is correlated with, and that accounts causally -- and fully -- for everything we do and say, without the need or room for any extra causal help.

(6) I don't find it particularly useful or informative to call this "epiphenomenalism": it is simply a failure of causal explanation, an "explanatory gap"  (one might as well call it "exceptionalism," equally unilluminatingly) -- but I suppose one is free to call an unsolved and insoluble explanatory problem whatever one likes...
JS: "When you ask "why are some functions felt?," what is it that you suppose is feeling the functions?  What sort of entity can feel?  I do not see how you can answer this question without explicitly embracing dualism; and if you do not answer it, then your usage of the term "feel" becomes highly suspect"
The trouble with uncomplemented categories is that they do raise a host of puzzles: 

(a) I know (cartesianly) that feeling is going on (sentitur).

(b) I have evidence (noncartesian) that there is a world, that I have a body, that others have bodies, and that my feelings (seemings) are very closely correlated with what seems to be going on (doings, functing) in that outside world.

(c) It is part of the nature of feeling that feelings are felt. "Unfelt feelings" are self-contradictory (and meaningless), and the notion of unfelt feelings has given rise to a lot of incoherent hocus-pocus (such as the notion of unconscious thoughts and an unconscious mind -- rather than the [mostly] unfelt functing plus the [minority of] felt functing that is all there really is). 

(d) It also seems to be part of the nature of feeling that a feeler feels the feelings and that it feels-as-if I am the feeler. Insofar as cartesian certainty is concerned, all I can say is that it is certain that feeling is going on (when it is), and that it feels like I am the feeler. In certain disordered states, that's not so clear; but from a sober (but noncartesian) standpoint, it is very likely that my brain causes my feelings, and also causes me, as a continuous identity, feeling and remembering the feelings I've felt. 

(e) No one knows how or why the brain causes feelings; the brain (like everything else, including Darwinian evolution) is a functor. It is natural to ask how and why some brain functions are felt, but there is no causal room for a causal answer.

I think I've answered your question as well as one can, and without "explicitly embracing dualism".
JS: "There is no practical difference between epiphenomenalism and dualism that I can see." 
Rather than talking ontics (on which I am a monist), I prefer to talk epistemics (on which I prefer to call an explanatory failure by its proper name).
JS: "Your position cannot be established a posteriori.  Appeals to common knowledge and ostensive definitions can only beg the question.  You do indicate something like Chalmers' conceivability argument when you talk about robots, and that is an a priori argument; however, I am not convinced" 
I take the cogito (or sentitur, rather) to be based on evidence we have from experience (hence a posteriori) -- indeed it is the paradigmatic case of evidence from experience (i.e., feeling). But it is experiential evidence only of the indubitable (incorrigible) fact of experience, not more -- and it is certainly not an explanation of the causes or effects of experience.

No, I have no use whatsoever for "conceivability" arguments. I have no idea whether or not there can be Zombies (i.e., unfeeling Turing-scale robots, indistinguishable in their doing/saying capacities from ourselves), but what I happen to believe is that if a T-scale robot is possible, it will feel. 

Nor is the argument that there is no causal room over and above the 4 FFs an a priori argument. It's contingent on the evidence that there are only the 4 FFs. Telekinetic dualism seems a perfectly conceivable, indeed plausible, alternative. It just happens to be false.

REFERENCES
Harnad, S. (1987) Uncomplemented Categories, or, What is it Like to be a Bachelor? 1987 Presidential Address: Society for Philosophy and Psychology. http://cogprints.org/2134/

Harnad, S. (2005) To Cognize is to Categorize: Cognition is Categorization, in Lefebvre, C. and Cohen, H., Eds. Handbook of Categorization. Elsevier. http://eprints.ecs.soton.ac.uk/11725/










2009-05-04
The 'Explanatory Gap'
Reply to Stevan Harnad
Stevan,
What, in your opinion, might count as a causal explanation of a feeling rather than a mere correlate or an analog of a feeling?




2009-05-04
The 'Explanatory Gap'
Reply to Arnold Trehub

PREDICTING WHAT WE FEEL IS NOT EXPLAINING THAT WE FEEL

AT: What, in your opinion, might count as a causal explanation of a feeling rather than a mere correlate or an analog of a feeling?
Since I do not believe that feeling can be causally explained, you are actually asking me to give you a counterfactual-conditional reply. That's a bit like asking someone who does not believe that one can trisect an angle or build a perpetuum mobile what would count as a trisected angle or a perpetuum mobile! But for trisection we have a proof it's impossible and for perpetual motion we have a law of Nature that entails that it is impossible -- whereas I have neither proof nor law in the case of the causal explanation of feeling. So all I can do is repeat the argument:

If telekinetic dualism were true -- that is, if there were evidence that there could be "mind over matter," with the mental force being a fifth addition to the existing array of four fundamental forces of Nature (electromagnetic, gravitational, strong, weak) -- then that would be a causal explanation: Apples fall because of gravitation, and our fingers rise because we will it (we do what we do because we feel like it, not because we are impelled by the other four forces to do it).

But telekinetic dualism is false; all evidence is against it. 

So whereas we certainly cannot (thanks to Descartes) doubt that feelings exist (and whereas feelings are themselves caused [though we have no idea how] by our brains almost as certainly as apples are caused to fall by gravity), we can conclude from the fact that telekinetic dualism is almost certainly false that feelings almost certainly do not themselves have any causal consequences. So we cannot explain (causally) why we feel. All we can explain is what our bodies can do (and how). Feelings piggy-back (somehow) on that functing, without any causal consequences, although they are quite tightly correlated with our functing.

Your own focus, Arnold, is on predicting what we feel (which can in many cases be done, thanks to the tight correlation); but predicting what we feel, no matter how minutely, is in no way explaining that we feel, neither how nor why. (Predicting what we feel simply takes the fact that we feel for granted, thereby begging the question of explaining how or why, and leaving the explanatory gap gaping.)

-- SH

2009-05-04
The 'Explanatory Gap'
Reply to Stevan Harnad
Could someone define the term 'functing' for me please?  Not that I have any intention of using it (a somewhat unhappy coinage, I feel) but I am curious.

DA

2009-05-04
The 'Explanatory Gap'
Reply to Derek Allan

"FUNCTING" IS ALL OF PHYSICAL, BIOLOGICAL AND ENGINEERING CAUSAL DYNAMICS

DA: Could someone define the term 'functing' for me please?
"Functing" (aka, function) is just ordinary causal dynamics, whether in natural inanimate physical systems, biological ones, or artificially engineered ones: everything observed and described in the physical sciences, biological sciences, and engineering. 

Physical, biological and engineering explanation is all causal and functional. (It's sometimes called "functionalism.") And I coined my tongue-in-cheek term "functing" to remind those who are attempting to provide a functional explanation of the causal role of consciousness (feeling) what they are really up against. 

The "mind/body" problem is really just the "feeling/functing" problem. When you put it like that, it becomes transparent that "explanations" such as "the function of pain is to alert the organism to the presence of tissue damage and the need to take evasive action" are circular and hence empty, hence question-begging, because one can always reply: "Yes, but how/why is the function felt, rather than just functed?"


-- SH




2009-05-04
The 'Explanatory Gap'
Reply to Stevan Harnad
SH: "So whereas we certainly cannot ... doubt that feelings exist ( [1] and whereas feelings are themselves caused ... by our brains almost as certainly as apples are caused to fall by gravity), [2] we can conclude ... that feelings almost certainly do not themselves have any causal consequences.  [3] So we cannot explain (causally) why we feel. All we can explain is what our bodies can do (and how). Feelings piggy-back (somehow) on that functing, without any causal consequences, although they are quite tightly correlated with our functing."

Stevan, your comments above perplex me. According to [1], you assert that feelings are caused by the brain, and according to [2], feelings have no causal consequences. It seems to me that you are claiming feelings are either (a) non-physical events caused by the brain in a dualistic universe, which naturally have no causal consequences for subsequent brain activity, or (b) physical events caused by the brain which have no causal consequences for subsequent brain activity. Which case (a or b) do you endorse?


According to your statement [3] above, I take it that in order to explain why we feel we would have to show that feelings have causal consequences. Am I correct in assuming from these statements that you believe we can explain how the brain causes feelings, but we are unable to explain why the brain causes feelings?


...AT







2009-05-05
The 'Explanatory Gap'
Reply to Stevan Harnad

Hi Stevan.  I enjoyed your response. We're actually much closer in our views than you realize.  I've read your papers on the symbol grounding problem (years ago) and agree with your view.  In fact, I've reformulated that concept just slightly (from my engineering perspective) and plan on referencing the symbol grounding problem in a paper I'm working on.  

We talk a different language - philosophy versus engineering - and I think that difference is what may be confusing. The only real difference we seem to have, as far as I can tell, is that you feel there is no fundamental problem with there being a reliable correlation plus no mental causation. I'm not suggesting there is no reliable correlation, nor am I suggesting there is mental causation. I'm only suggesting that the two are incompatible. I don't really care which one gets discarded, but I don't see any way we can have both reliable correlation AND no mental causation. In contrast, you seem to be claiming that there is both a reliable correlation AND no mental causation.

To your points.

SH: (1) The most important is the ontic/epistemic distinction: Distinguish been what there really is (ontic) and what we can know about what there really is (epistemic), e.g., what we can observe or measure. Although it was fashionable for a while (though one wonders how and why!), it will not do to say "I shall assume that what I can observe and measure is all there is and can be." Not if you want to address the question of the explanatory gap, rather than simply beg it! 
 

Agreed. Just to clarify, however: when you quote others saying "I shall assume that what I can observe and measure is all there is and can be," I agree this begs the question regarding the explanatory gap. Just to be sure we agree, I would claim there is no way to come up with a measure (or to implement a physical interaction, as you put it) that can discern feeling. Looking at all the possible methods of physical interaction, I can't think of a single one that might, for example, interact with feeling matter while not interacting in an identical manner with matter that is not feeling. Putting a chunk of matter into a physical machine that can test for feeling and report it is not possible, because there is no physical interaction that can tell us that feeling exists and what the feeling is like. If there were, there would be no explanatory gap. I believe this is what you mean by incommensurable, is that correct? If so, I agree.

SH: (2) Observation and measurement . . .

SH: (3) Your third equivocation . . .

I fully agree with 2 and 3.  I think we're on the same page.  The arctic sensors have no feeling (and no meaning), nor the computer that records them.  We obviously agree there is an explanatory gap.  Physical interactions don't explain feeling, and the arctic sensors are not capable of meaning. 

SH: It is perfectly natural, indeed universal, to believe and feel that our feelings matter, and that most of what we do, we do because we feel like doing it, and not just because functing is going on, of which our feelings are merely correlates -- correlates of which we do not know the causes, and, even more important, correlates which themselves have no effects of their own, and we cannot explain how and why they are there at all. (That, yet again, is the f/f problem and the explanatory gap.)

Yes, I agree there is a correlate. The issue I have is only that it seems inconsistent to suggest that feelings correlate AND are not part of any causal chain. I don't have the answer to how mental causation might influence physical phenomena, and I have no solution to the explanatory gap. I only want to explore the possibility that, if feelings correlate, then they must be, in some sense, part of the causal chain. By "causal chain" I mean there is a series of causal influences, whether they be classical or quantum scale. 

Just one note on what is meant by causal chain.  A causal chain can:
(1) be all on one physical level.  Transistors on a chip represent a causal chain at a single level in which one transistor influences another. 
(2) move up or down levels, such as transistors and transducers causing computers to change state. The computer's change of state causes valves and pumps to operate, which causes fluid to flow in a predetermined manner, eventually leading to a rocket being launched. 

It seems to me we're much closer to agreement than you realize from the remainder of your response. I would also like to point out that your symbol grounding problem is perfectly acceptable to me. However, I might have a slightly different perspective. In my view, there is a meaning in my head that must be converted to symbolic form in order for there to be a physical transmission of that meaning. Neurons in my brain convert the meaning into a symbolic vocalization. That vocalization has no meaning itself; it is simply a symbolic representation of the meaning I had in my head. There is a transmission of symbolic acoustic pulsations in the air that travel out away from my mouth in a spherical wave front. A pair of ears converts these pressure waves in the listener's ear to neuron interactions and then back into meaning in someone else's head. However, if one claims mental causation is not part of the causal chain, I will claim that the above interpretation is unacceptable. Anyone who refuses mental causation must deny the above is true, because feeling has no causal influence and thus cannot make itself known. I cannot reliably report my feelings! 

Perhaps using Kim's framework for a causal chain, and how mental causation is problematic, might help ground the conversation.  Here is my very brief overview as I understand the salient points:
We will assume a physical basis P at time t produces mental state M (or any higher level property) at that time (mind-body supervenience).  Any identical physical basis P should have an identical mental state M. 

Similarly, at time t + dt, physical basis P* causes mental state M*. 

Kim says, "M* is instantiated on this occasion: (a) because, ex hypothesi, M caused M* to be instantiated; (b) because P*, the physical supervenience base of M*, is instantiated on this occasion. "

In other words, if we want to claim that M causes M*, we have two causes for M*: we have P causing P*, on which M* supervenes, and we have M causing M* directly. We can't have two causes, so we rule out mental causation. Similarly, we might suggest that M causes P* (downward causation), which again results in two causes, this time for P* (namely P and M).
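
In symbols, roughly (just a sketch of the exclusion worry as I've summarized it here, not Kim's full argument):

\[
\begin{array}{ll}
\text{Supervenience:} & P \;(\text{at } t) \Rightarrow M, \qquad P^{*} \;(\text{at } t + dt) \Rightarrow M^{*} \\
\text{Physical causation:} & P \rightarrow P^{*} \\
\text{Putative mental causation:} & M \rightarrow M^{*} \quad \text{or, downwardly, } M \rightarrow P^{*} \\
\text{Exclusion:} & P^{*} \text{ (and with it } M^{*}\text{) already has a sufficient cause in } P, \\
& \text{so any causal contribution from } M \text{ is redundant and gets excluded.}
\end{array}
\]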

If we refuse mental causation, we can't accept that M causes M*, nor can we accept that M causes P*. All we can accept is for P to cause M and P* to cause M*. Nothing more, right? No wait, there's one more thing you want but you don't say it. You've hidden it behind the 'telekinesis' door. ;) 

You want M to correspond to P AND do so reliably.  You want there to be a purely functed report of M by P. 

The problem as I see it isn't that you refuse M causing M*.  That much is fine, I agree.  That is one mental state causing another.  We don't need that since the supervenience base P is sufficient to produce M.

The problem as I see it is that in order for there to be a RELIABLE report of M (or M*) by P (or P*), P must not only have knowledge of P, it must ALSO have knowledge of M.  However as soon as you allow that P can reliably report M, you've admitted to M being part of the causal chain.  M is being acted upon when reported by P.  In order for P to reliably report M, P must funct a description of M in some way.  How does P create this description of M?  How can P funct a description of M if P can't implement a physical interaction - a physical interaction that is able to correlate that functed description with M? 

It's no good saying P reports M because P is reporting about itself, since M cannot be physically measured. If you want to now claim that if we measure the momentum or pressure or some other physical property closely enough then we'll be able to measure feeling, I disagree. M cannot be found out about by observing physical interactions, so P can't find out about M by monitoring its own physical interactions; P can only (at best) report its own physical interactions. Also, there is no meaning associated with those physical interactions, and when I say meaning here I mean that the physical interactions are not meaningful per your symbol grounding problem (which I agree with).

One might contest that by monitoring neural interactions we can determine what felt experience is occurring, but to do this we need empirical correlations. We need to know that when certain neurons fire, this firing of neurons correlates to reported feelings, so this is no help for us. We are not measuring feeling, we are measuring something physical, and we are making the assumption there is a reliable correlation. If there were any way of monitoring neural interactions and determining what feeling those neurons were experiencing, there would be no explanatory gap and we could stop arguing about feelings and explanatory gaps.

Just to point out where IMHO Kim goes wrong on all this, Kim said:

KIM: Suppose that pain could be given a functional definition -- something like this: being in pain is being in some state (or instantiating some property) caused by tissue damage and causing winces and groans.  Why are you experiencing pain?  Because being in pain is being in a state caused by tissue damage and causing winces and groans, and you are in neural state N, which is one of those states (in you, or in systems like you) that are caused by tissue damage and that cause winces and groans.  Why do people experience pain when they are in neural state N?  Because N is implicated in these causal/nomic relations, and being in pain is being in some state with just these causal/nomic relations.  It is clear that in this way all our explanatory demands can be met.  There is nothing further to be explained about why pain occurs, or why pain occurs when neural condition N is present.

Kim tries to convince us that pain correlates with tissue damage, winces and groans, and our intuition tells us this is true. However, we need to stop thinking our intuition leads us only to true conclusions. If we can't logically support this intuitive deduction, we must stop using our intuition. 

Best regards,
Dave.

(Posted 5/3/09, 9:30 PM EST)


2009-05-05
The 'Explanatory Gap'
Reply to David Chalk

I wanted to summarize my last post for clarity:

Summary: In order for P to reliably funct a description of M, there must be an interaction between P and M. For P to report anything truthful whatsoever about M, even to report the presence of M, there must be a way for the physical supervenience base P to obtain information about the existence of M. 

For M to be non-causal, P can be the supervenience base, but there cannot be any flow of information about M to P, or else M has entered the causal chain. If there is no way to obtain information about M, then there is no reliable correlation, what P reports about M is unreliable, and M becomes non-causal.

Conclusion: For M to be non-causal, M cannot be reliably reported by P. If M is reliably reported, M is causal in the sense that it enters the causal chain.

I’d be interested in exploring possible arguments which avoid this conclusion.  (Posted May 4, 2:30 pm est)


2009-05-05
The 'Explanatory Gap'
Reply to David Chalk
David, your treatment has become a bit too complicated for something that should be kept simple if there's to be any hope of gaining any new insight at all. 

The answer to (what I think is) your question -- "How can feelings be there, reliably correlated with the functing, and yet not be in the 'causal chain'?" -- is this: Both the feeling and the correlated functing have a common cause (the functing unproblematically, the feeling inexplicably), and that common cause is functing too. The felt effects of the functing are correlated with the functed effects of the functing, but only the functed effects are, in their turn, causal. The feelings just dangle -- correlated, but lacking any causal power of their own. And that's the explanatory gap.



2009-05-07
The 'Explanatory Gap'
Reply to Stevan Harnad
Stevan, is feeling a physical brain event or a non-physical event?
... AT









2009-05-07
The 'Explanatory Gap'
Reply to Arnold Trehub

THE EXPLANATORY GAP IS EPISTEMIC, NOT ONTIC

AT: "is feeling a physical brain event or a non-physical event?"
Feeling is an (inexplicable) effect of physical brain events. No use fussing over whether or not it's "physical" (of course it is, somehow): the problem is with explaining its causality (how? why?). That's the mind/body (feeling/function) problem, and it's an explanatory gap, not a pretext for ontologizing about whether there are one or two kinds of "stuff." Even if God sent a messenger and reassured us that everything was strictly physical, that would not answer the how/why question about causality, hence it would not close, nor even narrow, the explanatory gap one bit!

(By the way, I have a response to your earlier, longer posting underway. Just need the time to put some finishing touches on it!)


-- SH

2009-05-07
The 'Explanatory Gap'
Reply to Derek Allan
Hi Derek

I am not famous for popularity-winning answers to simple questions, but perspective does all the same tend to add gravitas. First, the term "functor", analogous to the sociological "actor", appears in Audi's Dictionary of Philosophy, and Eco (Theory of Semiotics, 1976) utilizes "functive" not too unlike the actor-active correspondence. An actor is a stylized 'any-given-person-able-to-act' affair, and "active" denotes what is or has happened when the actor acts -- when, in the progressive tense, the actor "is, or has been, or will have been acting" -- or, in a different context, when the brain is functioning. I really cannot pretend to know how Monsieur Harnad strictly defines "functing" as distinct from "functioning", but that seems to me the right question to pose to him. As to his "coinage" or neologism, I have nothing to say unless I can myself coin a 'progressive' whenever I wish upon any given antecedent stem. I haven't had the balls to go there. Yet.

CSH

2009-05-07
The 'Explanatory Gap'
Reply to Stevan Harnad
SH: "Feeling is an (inexplicable) effect of physical brain events. No use fussing over whether or not it's "physical" (of course it is,somehow): the problem is with explaining its causality (how? why?)."

Stevan, if feeling is a physical brain event, what exactly is your reason for asserting that feeling is a causally inexplicable brain event? (You might say unexplained, but inexplicable?!)


.. AT












2009-05-07
The 'Explanatory Gap'
Reply to Arnold Trehub
AT: "what exactly is your reason for asserting that feeling is a causally inexplicable brain event? (You might say unexplained, but inexplicable?!)"
Arnold, you are right that there are two distinct things one can say here, and I am in fact saying them both: 

(1) Unexplained. That there is no explanation of how-and-why we feel is, I think, uncontested and incontestable. The only explanation would be an account of how feelings are caused by the brain, and what effects they have, and there isn't one. 

(2) Inexplicable. That there cannot be a causal explanation of how-and-why we feel is just an argument: I have argued that it follows from the fact that (a) functions and feelings are correlated but incommensurable and (b) there is neither need nor room for feelings to be independent causes (except if telekinetic dualism were true, which it is not), because the four fundamental forces cover all of causality, which is all of functionality. Hence if brain function does somehow cause feelings in some mysterious way (as it is virtually certain that it does, and I of course believe it does), feelings are doomed to just dangle, functionally superfluous, having no independent causal power of their own: all the effects that we feel as being caused by feelings are in reality caused -- and hence fully explained -- by the brain functions (and brain I/O) that (mysteriously) cause the feelings. This leaves the feelings dangling, inexplicably. An explanatory gap.

Arnold, with apologies, I hope I will be able to finish my longer response to your earlier, unanswered posting (N - 2) this evening!


-- SH

2009-05-08
The 'Explanatory Gap'
Reply to Arnold Trehub

WHY WOULD TURING-INDISTINGUISHABLE ZOMBIES TALK ABOUT FEELINGS (AND WHAT, IF ANYTHING, WOULD THEY MEAN)?

AT: "you assert that feelings are caused by the brain"
I said that (for what it's worth) I believe that feelings are caused by the brain almost as confidently as I believe that apples are caused to fall by gravity. The difference in confidence is because we can explain causally how apples fall (we understand universal gravitation) but we cannot explain causally how the brain causes feelings.

I also said that I do not believe it is possible to explain causally how the brain causes feelings (but all I gave to support that belief was negative evidence [that telekinetic dualism is false] plus a methodological argument [incommensurability]).
AT: "you assert that feelings have no causal consequences"
I asserted that in the form of the empirical fact that telekinetic dualism is false: All causal consequences of brain activity are causal consequences of the four known forces. There is no fifth force (feeling). 

It is a fact -- an unexplained fact, but a fact -- that we feel, and it is almost certain that our feelings are caused (mysteriously) by our brains. But as feeling is not an independent fifth force, whatever feels as if it is caused by feelings is actually caused by the brain (which also [mysteriously] causes feelings). 

The paradigmatic example is the feeling that my finger moved because I willed it. It does indeed feel that way, but all evidence is that it moved because of activity in my brain -- perhaps the same activity that (mysteriously) caused the feeling that my finger moved because I willed it. 

Feelings have no causal consequences; it is only what (mysteriously) causes feelings that has causal consequences. It only feels as if the feelings are the causes.

It is for this reason that although it is a mystery -- and I think an unresolvable mystery -- how we feel, it is an even bigger mystery why we feel. For it looks as if everything that we do that is accompanied by feelings -- including the feeling that the doing is happening because of those feelings -- can be done without feelings: Indeed, the fact that the doing is accompanied by feeling is not an explanatory aid (apart from the fact that it squares with how we feel when we do): Rather, it is an overwhelming explanatory burden, because we cannot explain either how feeling is caused by the brain or what feeling itself causes that is not already caused by whatever (mysteriously) causes feeling. 

This might help set intuitions: I don't think anyone will deny that if the human species were able to do all it can do -- talk, learn, teach, socialize, invent, do science and engineering, write history, biography and fiction, etc. -- but it did not feel, then there would be no mind/body problem or explanatory gap. Things would be much more straightforward: Cognitive neuroscience would only need to explain the (formidable) capacity of this hypothetical insentient species to do and to say all that our own species can do and say, but not the fact that they feel (because they do not feel).

(I am not here suggesting that Zombies are possible: I am just trying to highlight the extra explanatory burden that the undeniable fact of feeling imposes on causal explanation. It should be clear that the existence of feelings is a liability rather than an asset for causal, functional explanation.)

Now I said things would be a lot more straightforward, explanatorily speaking, if there were no feeling, just doing -- if all "functing," nonbiological and biological, were just unfelt functing. There would, however, be an unresolved puzzle even then -- though it would not be a causal puzzle: Why would such an insentient species speak of feeling at all? Why would they say "I am feeling tired" rather than just "I am tired" (meaning my body is fatigued)? (I don't think there would be any problem with the use of the indexical "I" by such a species, by the way, despite all the fuss some make about the concept of "self" and "self-consciousness": the trouble, as usual, is with the felt aspect and not the functional aspects of "selfhood.")

Possibly the feeling vocabulary would be useful as a shorthand for speaking of internal states in the speaker and others. After all, internal states are just as invisible as mental (i.e., felt) states. "Feeling happy" and "feeling sad" may all have internal functional counterparts in the sort of "mind-reading" that this twin species would still have to be able to do, if it were to have the same adaptive social and verbal capacities as our own species. (To "feel happy" might for them be an internal state that was relatively free of processes correlated with actual or impending tissue damage, or free of data predictive of other current or future untoward adaptive consequences, and/or correlated with the attainment, or the impending attainment, of a functional goal, perhaps related to survival, reproduction, competition, or social success: all of these make sense as purely adaptive, functional categories, in a Darwinian survival machine, irrespective of whether it just functs them, or also feels them as it functs them.) 

Maybe even the locution "I am sincerely sorry," uttered in its pragmatic social context, has a purely functional role to play, even for a Darwinianly successful Zombie; and the only reason we find that counterintuitive is that we do feel, and find it difficult even to imagine what it would be like not to -- with good reason, because "be like" means "feel like," and of course it would feel like nothing, "feeling" being an uncomplemented category. (Thus does the fact of feeling not only create the mind/body [feeling/function] problem and the gap in causal explanation, but the anomalous nature of "feeling" as a category adds a further sense of "mystery" to the explanatory gap.)

A tougher distinction in such a Zombie species would be the distinction between Zombie psychopaths (who, like our psychopaths, purportedly do not feel guilt or remorse) and Zombie normals, who purportedly do. But I think that it only takes a little reflection to see that there are behavioral and functional distinctions between our psychopaths and normals that could, in Zombie psychopaths and normals, be based on responsiveness to certain internal states, without the internal states having to be felt states. (These behavioral and strategic distinctions might even be relevant to explaining functionally why the psychopath genotype exists at all, in our sentient species.)

(Note that, because we do feel, we have trouble imagining a species saying and doing the same things we say and do, but without feeling. But the real trouble is in the other direction! It is the Zombified version of feeling-talk and feeling-action that has the straightforward functional explanation, and the feeling that is the a-functional dangler, not the other way round!)

So what about "the mind/body problem" itself? Would philosophers in this hypothetical insentient species still ponder and argue over the causal power of feeling when they in fact have no feeling, and the only referent for "feeling" in their discourse is "internal functional state"? Would Zombie philosophers "know" that for them, there was no distinction between felt and unfelt functing? Would they really have any knowledge at all, as opposed to mere know-how, given that they are incapable of more than lip-service to the Cartesian "sentio ergo sentitur"? The cogito does not work, after all, for inferred states: It only works for felt states. (That's the quintessence of Descartes' method of doubt.)

Some may want to conclude that this puzzle is in fact evidence for the causal power of feeling after all, for only a species that actually felt could engage in discourse about the feeling/function problem coherently! 

I'm inclined to conclude otherwise. I happen to doubt that there could be a feelingless ("Zombie") species (natural or artificial) that was nevertheless Turing-Indistinguishable from ourselves. If they were really feelingless, there would be other differences in what they did and said. And what squares our own species' discourse with our feelings is whatever it is in our brains that keeps our feelings so correlated with our functing: It is not an independent causal consequence of the fact that we feel, but a consequence of the common (functional) cause of both our doings/sayings and the feelings that they (mysteriously) generate as a lockstep accompaniment. 

So the question of how and why we feel (which is exactly the same as the question of how and why we are not just Darwinian Zombies) also leads to the question of how and why there could not be Zombies that were Turing-Indistinguishable from us -- if there could not be. For if there could, then the mystery could be just due to some (colossal) evolutionary quirk or coincidence in the case of the terrestrial biosphere. If there could not be Zombies, then the mystery could be a fundamental principle of functional organization that we will never know or understand, because the felt component will always be functionally superfluous under any causal explanation that does not cheat or beg the question.
AT: "you are claiming feelings are either (a) non-physical events caused by the brain in a dualistic universe and naturally have no causal consequences for subsequent brain activity, or (b) they are physical events cause by the brain but have no causal consequences for subsequent brain activity. Which case (a or b) do you endorse?"
I hope it is clear by now that I endorse (b) and add only that I think that how the brain causes feelings is also inexplicable, because of the incommensurability of function and feeling, despite their correlation. (I invite others to attack me on this, and force me to defend it more rigorously: Is it coherent to say "correlated yet incommensurable"?)
AT: "[You say] that in order to explain why we feel we would have to show that feelings have causal consequences." 
Indeed we do, otherwise feelings remain the mysterious, unexplained dangler they are -- and the explanatory gap gapes.
AT: "Am I correct in assuming... you believe we can explain how the brain causes feelings, but we are unable to explain why the brain causes feelings?
No, I don't believe we can explain how the brain causes feelings either (but I do believe the brain causes feelings). I do not, however, believe that feelings cause anything else: As I said, there's no causal room. Hence here it is not a matter of an actual causation that we cannot explain (the way we cannot explain how the brain causes feelings, even though it undoubtedly does) but an inexplicable lack of causation, making it inexplicable why we feel.


-- SH












2009-05-08
The 'Explanatory Gap'
Reply to Stevan Harnad
There's something bothersome in all this talk (which I confess I have not followed) about brains causing or not causing feelings etc - as if one were dealing with two 'things' and it were just a matter of working out the relationship between them.

The brain is a thing - granted. But what are feelings? They are closely related to thoughts, surely. And what are thoughts? Things? In what sense? Where are they located? What colour are they? How big? What shape?

I'm sorry, but all this talk about 'functing' (ghastly word!) and feeling etc strikes me as the kind of pseudo-scientific speculation that analytic philosophy, for all its claims to rigour, indulges in far too often.

(Though, it is Friday and the end of the week. Perhaps I'm just feeling jaded...)

DA 


2009-05-10
The 'Explanatory Gap'
Reply to Stevan Harnad

Hi Stevan.  I've read through a number of your papers but I can't find an explanation of why feelings are inexplicable.  If I'm wrong on that, I apologize, perhaps you can point out where you provide this explanation.  Consider that someone in DJC's (1) category above might claim that once science has explained how and why all the neurons and glia cells in our brains interact, once an explanation of every molecular interaction has been provided, there is nothing left to explain.  So why do you think feeling inexplicable?

I've tried to answer this one myself above, but will attempt a summary here.  First, I'd define feeling (or experience or qualia) to be a phenomenon.  It is something that happens.  It is an event which occurs at a specific time.  In addition, we know this event is supervenient on the brain, so at least we know, in general terms, where this event is happening.  However, I'd agree with Leibniz, who says, "Moreover, it must be confessed that perception and that which depends upon it are inexplicable on mechanical grounds, that is to say, by means of figures and motions."  We can determine what causes physical interactions, and we can use mathematics to describe the motions of things, but this method of describing things is utterly useless in defining feelings.  Feelings do not give way to mathematics; it would appear that only the physical substrate can be defined using mathematics.  So feelings are not explicable on mechanical grounds; otherwise we could define feelings in mathematical terms. 

Physical phenomena, in comparison, are explicable on mechanical grounds, and mathematics has proven to be applicable to those phenomena without exception.  Any physical phenomenon is an easy problem.  Any mental phenomenon is a hard problem.  Feelings are mental phenomena, which are separate and distinct from physical phenomena.  A mental event is not a physical event, although it is reasonable to conclude that mental events supervene on physical events. 

I think the Turing test (TT) is an excellent way of viewing this difficulty.  Normally, scientists and engineers such as myself will "measure" something by determining how that thing physically interacts with another thing.  When a correlation is found, and a mathematical explanation provided, we can say we have explained the phenomenon.  Mathematical models, such as Newton's laws, may not be perfectly accurate, but any more accurate theory must still account for what the less accurate model got right.  In comparison, the TT isn't a test like that at all.  The TT isn't even a test!  It does not check for the motion of parts and there is no mathematical treatment that can be associated with it.  It simply isn't a test in any scientific or engineering way.  The TT is a non-starter for that reason.  It doesn't test anything.  I would however agree with what you've said here:

SH: I further distinguish meaning, in this felt sense, from mere grounding, which is yet another property that a mere book or computer lacks: Only a robot that could pass the robotic Turing Test (TT; the capacity to speak and act indistinguishably from a person to a person, for a lifetime) would have grounded symbols. But if the robot did not feel, it still would not have symbols with intrinsic "intentionality"; it would still be more like a book or computer, whose sentences are systematically interpretable but mean nothing except in the mind of a conscious (i.e., feeling) user.

I like the way you put that: the robot has grounded symbols, but we still have a symbol grounding problem because we haven't provided a test to see if those symbols are in some way intrinsic and can therefore have meaning and produce feeling.  So I conclude that the TT isn't a test at all.  We're stuck with mental events being distinct from physical events and untestable, and that is why IMHO the explanatory gap is so difficult and feelings are inexplicable. 

I'd be interested in understanding why you say that feelings are inexplicable.

Best Regards,

Dave.


2009-05-10
The 'Explanatory Gap'
Reply to David Chalk

HOW AND WHY FEELINGS ARE INEXPLICABLE


DC: "I've read through a number of your papers but I can't find an explanation of why feelings are inexplicable." 

They are inexplicable because explanation is causal (functional) explanation, and we cannot explain (1) how (functionally) the brain causes feelings (even though it undoubtedly does), because feelings are incommensurable with function, and we cannot explain (2) why (functionally) the brain causes feelings, because there is no causal room for feelings themselves to have any effects (hence any function) apart from the effects and function of whatever (mysteriously) causes feelings.

Apart from that, all I can give is examples of the way functional/causal explanation of both how and why is always destined to fail:

Example 1: The reason tissue damage is felt (as pain) rather than just processed (as stimulus avoidance, etc.) is that the felt pain signals the organism to avoid the stimulus. (Explanatory Gap: Why is the signal to avoid the stimulus (etc.) felt, rather than just functed? And how is it felt, rather than just functed?)
Example 2: The reason we hear sounds rather than just process acoustic signals is that we have to select which sounds are relevant. (Explanatory Gap: Why is the selection felt, rather than just functed? And how is it felt, rather than just functed?)
Example 3: The reason it is important that we understand what sentences mean, is that we have to be able to act in accordance with what they mean. (Explanatory Gap:  Why is the understanding felt, rather than just functed? And how is it felt, rather than just functed?)

Etc. You will find that if the goal is to explain how or why a function is a felt function rather than just a "functed" function (with exactly the same functionality), it will always turn out that there is no independent functional role that can be attributed to the fact that it is felt: The same thing, unfelt, would be functionally equivalent. And it is not an explanation to insist that it is just some sort of "brute fact" about certain functions that they just are felt functions. That may well be the case. But we were looking for a causal/functional explanation of how and why, not merely a mysterious assertion that!

That's the explanatory gap: It's an epistemic gap, not an ontic one.

DC: "...someone in DJC's (1) category above might claim that once science has explained how and why all the neurons and glia cells in our brains interact... every molecular interaction... there is nothing left to explain."  

They can claim that. But it does not answer our how/why question, hence it leaves the explanatory gap fully agape.

There are two ways to construe the claim that there is "nothing left to explain." 

One is that we cannot explain any further. That, I think, is quite correct (because feeling and function are incommensurable and because there is no room for feelings to have causal power of their own, over and above the causal power of the functions that [mysteriously] cause them). 

The other is to say that therefore everything has been fully explained. That, I think, is obviously false, since we have not explained how or why some functions are felt. Yet it is a fact that they are felt. And it is as natural as can be to ask "how and why?". To reply that it is simply a (mysterious) brute fact of nature is not to reply at all, hence to leave it unexplained. 

Hence the explanatory gap.

DC: "First.. feeling... is something that happens... at a specific time...supervenient on the brain so... we... know... where..."

I find the weasel-word "supervenience" as vacuous and ineffectual as all the synonyms and paranyms of "feeling" ("consciousness," "qualia," "mind," etc. etc.) that we love to fall back upon when we have nothing substantive or new on offer: We feel. That's a cartesian certainty. Hence there are feelings. Sentitur. Based on everything else we know about the world, it's of course the brain that causes feelings. The question is: how? and why? 

Replying that feelings "supervene" on brain function adds absolutely nothing.

DC: "I'd agree with Leibniz... [that it is] inexplicable on mechanical grounds... in mathematical terms..."  

David, I wonder why -- if you agree with Leibniz that feeling is inexplicable -- you are asking me to explain how/why feeling is inexplicable! But I hope I have by now explained it: Because we cannot say how or why we feel rather than just funct; how/why are functional questions.

DC: "Physical phenomena in comparison, are explicable... an easy problem. [Explaining feeling is] a hard problem.. not a physical event, although it... supervene[s] on physical events."  

Yes, a functional/causal explanation of everything other than feeling is (in principle) an "easy" problem: normal science and engineering. Explaining how and why we feel is not just "a" hard problem, but the hard problem (and, in my opinion, insoluble). 

(On the other prominent candidate for being a "hard" problem -- "duality" in quantum mechanics -- I can only plead nolo contendere, for want of the technical expertise even to judge how much of a problem it is, whether or not it is soluble, and if so, how and why.)

But the only thing that is being said in saying that the feeling/function problem is "hard" is that all other scientific and engineering problems are functional (and often also mathematical), but that those resources are ineffectual for explaining how and why some functions are felt -- for the (simple!) reason that "how/why" are functional, causal questions, and (except on pain of telekinetic dualism), feeling has no causal (hence no functional) power.

DC: "[T]he TT isn't a test... in any scientific or engineering way... [it] does not check for the motion of parts... no mathematical treatment... a non-starter..."  

I think you are profoundly wrong about that. Candidates for passing the TT will be designed by human beings; the candidates will have moving parts, and both dynamic and computational processes, known to the designer. 

What the TT tests is performance capacity. It of course cannot test whether the successful candidate feels. But that's part of the point of the TT. It is an embodiment of the explanatory gap: We will never know whether or not a successful candidate feels (only the candidate can know); and if it does, we will never know how or why.

DC: "I like the way you put that: the robot has grounded symbols, but we still have a symbol grounding problem because we haven't provided a test to see if those symbols are in some way intrinsic and can therefore have meaning and produce feeling."    

Alas, you misunderstood me. A TT-passing robot certainly has grounded symbols, which certainly solves the symbol grounding problem. But grounding is not meaning. And only a TT-passing robot that feels would have intrinsic meaning. 

In other words, not only is systematic interpretability insufficient for grounding, but robotic grounding (even TT-scale) is not sufficient for (intrinsic) meaning, unless it generates feeling. But we have no way of knowing -- let alone explaining -- whether, how or why a TT-robot (or any functional system) feels rather than just functs.

DC: "So I conclude that the TT isn't a test at all."    

Of course it's a test: a test for having functionally explained our total performance capacity. It is not, however, either a test or an explanation for your feeling capacity.

DC: "We're stuck with mental events being distinct from physical events and untestable, and that is why... the explanatory gap is so difficult and feelings are inexplicable."    

You're back into the verificationist observationalism I pointed out before: The problem is not the untestability. (The TT robot might feel, after all.) The problem is with inexplicability. And that problem arises from causality and causal explanation, not from some sort of physical/mental "dualism" (which explains nothing, but merely gives yet another name to the explanatory gap.)

DC: "I'd be interested in understanding why you say that feelings are inexplicable."   

I hope this time I have succeeded in conveying an understanding!

-- SH



2009-05-11
The 'Explanatory Gap'
Reply to Stevan Harnad
Stevan, this seems to be what you are proposing:

1. Physical functions are explicable.
2. Brain functions are physical and therefore are explicable.
3. Feelings are physical but are (mysteriously) inexplicable.

Isn't this mysterious inexplicability of feelings a direct consequence of an incoherent argument?

.. AT


2009-05-11
The 'Explanatory Gap'
Reply to Arnold Trehub

AT: "Isn't this mysterious inexplicability of feelings a direct consequence of an incoherent argument?"

I'm afraid not, Arnold. It's a direct consequence of the peculiar nature of feelings. That peculiar nature can of course be blithely disregarded, but only at the price of begging the question, insofar as the "hard problem" is concerned...

-- SH



2009-05-11
The 'Explanatory Gap'
Reply to Stevan Harnad
Could someone remind me please what the 'hard problem' and the 'easy problem' are?

DA

2009-05-11
The 'Explanatory Gap'
Reply to Derek Allan
DA: Could someone remind me please what the 'hard problem' and the 'easy problem' are?

Hard Problem: Explaining how and why we feel.

Easy Problems: All the rest of the problems of science, mathematics and engineering (except maybe quantum duality).


-- SH




2009-05-11
The 'Explanatory Gap'
Reply to Stevan Harnad
But if I recall, that is not the 'hard problem' or the 'easy problem' as Chalmers defines them? (He is the source of the phrases, I think?)



2009-05-11
The 'Explanatory Gap'
Reply to Derek Allan

WHAT IT FEELS LIKE TO FEEL: APPLYING OCCAM'S RAZOR TO THE MIND/BODY (FEELING/FUNCTION) PROBLEM  


DA: "But If I recall, that is not the 'hard problem' or the 'easy problem' as Chalmers defines them?"
Chalmers is talking about the same problem, the mind/body problem. Putting it in the language of a causal explanation of the "how/why" of feeling is my own way of putting it, but it's exactly the same (age-old) problem. If it sounds like a different problem, that just shows how the way we put it can fool us (including fooling us into thinking that we have found a "solution" -- or that there is no problem, or more than one.)

Let me do a reductive transcription of Chalmers's way of putting it. (And let me note that his is already one of the simpler, more economical, and direct ways of putting it, even before I apply Occam's razor and a little anglo-saxon uniformity.)
DC: The really hard problem of CONSCIOUSNESS is the problem of EXPERIENCE. When we think and perceive, there is a whir of information-processing, but there is also a SUBJECTIVE aspect. As Nagel (1974) has put it, there is something it IS like to be a CONSCIOUS organism. This SUBJECTIVE aspect is EXPERIENCE. When we see, for example, we EXPERIENCE visual sensations: the FELT QUALITY of redness, the EXPERIENCE of dark and light, the QUALITY of depth in a visual field. Other EXPERIENCES go along with perception in different modalities: the *X* sound of a clarinet, the *X* smell of mothballs. Then there are bodily SENSATIONS, from pains to orgasms; MENTAL images that are conjured up internally; the FELT QUALITY of emotion, and the EXPERIENCE of a stream of CONSCIOUS thought. What unites all of these states is that there is something it IS like to be in them. All of them are states of EXPERIENCE.
Cutting out the redundant and superfluous parts:

"The really hard problem of FEELING is the problem of FEELING. When we think and perceive, there is a whir of information-processing, but there is also a FELT aspect. As Nagel (1974) has put it, there is something it FEELS like to be a FEELING organism. This FELT aspect is FEELING. When we see, for example, we FEEL visual sensations: the FEELING of redness, the FEELING of dark and light, the FEELING of depth in a visual field. Other FEELINGS go along with perception in different modalities: the *FELT* sound of a clarinet, the *FELT* smell of mothballs. Then there are bodily FEELINGS, from pains to orgasms; FELT images that are conjured up internally; the FEELING of emotion, and the FEELING of a stream of FELT thought. What unites all of these states is that there is something it FEELS like to be in them. All of them are states of FEELING."

(Note the slightly odd-sounding special case of how we speak of some of our sensations: We say we feel surface textures, heat, emotions, but to distinguish the sense modalities, we say we see (rather than feel) colors, hear (rather than feel) sounds, smell (rather than feel) smells, etc. Since the invariant in all of these is in reality still feeling (and the variation is just in what it feels like, not in whether it feels like something at all), all of these instances can be readily replaced by a still more perspicuous variant of Tom Nagel's already more perspicuous way of putting it, which is "what it feels like to X": what it feels like to see, hear, smell, etc. That is, and always was, the essence of the mind/body -- feeling/function -- problem, just as "sentio ergo sentitur" ("I feel, therefore there is feeling going on") was always the essence of Descartes' cogito.)


-- SH




2009-05-12
The 'Explanatory Gap'
Reply to Stevan Harnad
AT: "Isn't this mysterious inexplicability of feelings a direct consequence of an incoherent argument?"

SH: "I'm afraid not, Arnold. It's a direct consequence of the peculiar nature of feelings. That peculiar nature can of course be blithely disregarded, but only at the price of begging the question, insofar as the "hard problem" is concerned..."


Stevan, I don't doubt the peculiar nature of feelings. I believe, as you do, that feelings are very special aspects of our nature! But it doesn't advance our understanding to simply assert that feelings are inexplicable even though feelings are physical events caused by physical brain processes. What I would like to know is your principled reason for your claim that feelings are inexplicable. It won't do to simply say that feelings are inexplicable because feeling and function are incommensurable, because that is just another way to say that the systematic relationship between the brain function that causes feelings and the brain state that constitutes a feeling is inexplicable. Why, exactly, do you believe that the brain states that constitute our feelings can't ever be explained?


..AT


  

2009-05-12
The 'Explanatory Gap'
Reply to Stevan Harnad
Thanks for the reference to the Chalmers paper, Stevan. I confess I didn't read it all. I found there were too many unexamined presuppositions in it and I quickly tire of writing of that kind.

I notice (just to focus on one point) that Chalmers relies heavily on the Nagel idea that 'there is something that it is like to be a conscious organism'.   As I think I said in an earlier post, I think this idea just leads to a dead end. First, I notice there is no attempt to distinguish between human consciousness and any kind of animal "consciousness". So, there we have one unexamined presupposition... But more importantly, there is surely nothing it is 'like' to be conscious other than being conscious - which tells us absolutely nothing.

Suppose someone has broken their finger. I once broke my own finger so I could say to this person reasonably truthfully 'I know what it is like to have a broken finger'  (I guess I would compare - in memory - my present painless state with the sharp throb I felt at the time).  But suppose someone says to me, "I am conscious", and I reply "I know what it is like to be conscious".  It's an absurd conversation, is it not?  And for good reason. I haven't anything to compare (human) consciousness with - any more than the person I'm speaking to has. We are like fish in an aquarium: neither of us can imagine what it would be like to be outside. There is no 'broken finger' state (and being asleep, in a coma etc, is not that state: they are simply states in which human consciousness is not operating: they are - to stick with my analogy - simply like a world in which there are no fingers).  So, to my mind, all the talk of 'what it is like' is just a red herring.   I am amazed it keeps getting hauled out and relied on - as if it told us something enlightening.

I should add that I don't really think that your change of 'consciousness' and 'experience' to 'feeling' makes any material difference. Whatever we call it, we are still left with essentially the same problems.

DA


2009-05-12
The 'Explanatory Gap'
Reply to Arnold Trehub

PUTATIVE FUNCTIONAL EXPLANATIONS OF FEELING: A CHALLENGE

AT:  "Why, exactly, do you believe that the brain states that constitute our feelings can't ever be explained?"
Because in every attempt to explain the functional role of feeling, feeling turns out to be functionally superfluous (except if telekinetic dualism is true, and feelings have causal power -- but it isn't, and they don't).

I long ago made a challenge (the universal "translatability thesis") -- to any linguist who claimed that there was something that could be said in language X that could not be translated into language Y -- that they should tell me (in English) what it was, and why it could not be translated into language Y, and I would show that it could be translated into language Y, even if I did not know language Y.

I hereby make the same challenge for "explanations" of the functional or causal role of feeling: Tell me what it is, and I will show it is functionally superfluous on its own terms. 

(I gave some samples in earlier postings. This is not unlike Dan Dennett's "demoting" mentalistic explanations into mechanistic [usually behavioristic] ones, except that I am not denying the reality of feeling -- just its causal role.)


-- SH

2009-05-12
The 'Explanatory Gap'
Reply to Derek Allan

WHAT IT FEELS LIKE TO FEEL SOMETHING


DA: "Chalmers relies heavily on the Nagel idea that 'there is something that it is like to be a conscious organism'."  

He's right to rely on it: Nagel's was an apt insight.

But, to expose the redundancy and root out the equivocation, it's "There's something it feels like to be a feeling organism."

DA: "there is no attempt to distinguish between human consciousness and any kind of animal 'consciousness'."

No need to distinguish: The feeling/function problem is about the fact that we feel (something), not about what we feel -- whether this or that. 

DA: "there is surely nothing it is 'like' to be conscious other than being conscious - which tell us absolutely nothing." 

First, to expose the redundancy and root out the equivocation, it's "there is surely nothing it feels 'like' to feel other than to feel."

Yup: And your point is...?

DA: "'I know what it is like to have a broken finger'... I would compare - in memory - my present painless state with the sharp throb I felt at the time)."  

"I know what it feels like to feel like I have a broken finger."

But as for comparing your present painless state with the sharp throb you felt the last time: (Strictly [indeed, Wittgenstrictly] speaking, you are now feeling what it seems to feel like to feel no pain and to be feeling a memory of what seems to feel like it once felt like to feel a pain.)

Yup, and your point is...?

DA: "But suppose someone says to me. "I am conscious", and I reply "I know what it is like to be conscious".  It's an absurd conversation, is it not?" 

A: "I am feeling something." B: "I know what it feels like to feel something."

Not absurd in the least (spoken betwixt cognoscenti -- or, rather, sentienti). (Rather more puzzling spoken between Zombies -- however, as noted in a previous posting, it might be functionally adaptive as a way of referring to internal states unobservable to one's interlocutor, even when those internal states are not felt states).

DA: "And for good reason. I haven't anything to compare (human) consciousness with - any more than the person I'm speaking to has."

You are alluding here to the fact that feeling is an uncomplemented category: it is both impossible and self-contradictory to feel what it's like to not feel anything at all -- though it's perfectly possible to feel what it's like not to feel something in particular: to not feel this, but to feel that.

Well, yes, that -- i.e., the "poverty of the stimulus": the fact that we can only sample positive instances of feeling -- does make the category "feeling" all the more problematic, puzzling and troublesome. But it definitely does not make it empty or meaningless.

DA: "being asleep, in a coma etc, is not that state: they are simply states in which human consciousness is not operating" 

Yes, when you are not feeling, you are not feeling. In that sense, "you" are not "there," you're gone. (If Descartes over-reached with his "cogito," in concluding that he existed [sum] rather than just that feeling was going on [sentitur], we can safely, though not cartesianly, say that where [and while] there is no feeling going on, there is nobody home.)

Fortunately, you are reconstituted when you wake up. (A stone is not.)

DA: "I don't really think that your change of 'consciousness' and 'experience' to 'feeling' makes any material difference. Whatever we call it, we are still left with essentially the same problems."

We are indeed. But calling them by one name highlights that they are all one and the same problem...

REFERENCE

Harnad, S. (1987) Uncomplemented Categories, or, What is it Like to be a Bachelor? 1987 Presidential Address: Society for Philosophy and Psychology

ABSTRACT: To learn and to use a category one must be able to sample both what is in it and what is not in it (i.e., what is in its complement), in order to pick out which invariant features distinguish members from nonmembers. Categories without complements may be responsible for certain conceptual and philosophical problems. Examples are experiential categories such as what it feels like to "be awake," "be alive," "be aware," and "be." Providing a complement by analogy or extrapolation is a solution in some cases (such as what it feels like to be a bachelor), but only because the complement can in principle be sampled in the future, and because the analogy could in principle be correct. Where the complement is empty in principle, the "category" is intrinsically problematic. Other examples may include self-denial paradoxes (such as "this sentence is false") and problems with the predicate "exists."

-- SH




2009-05-12
The 'Explanatory Gap'
Reply to Stevan Harnad
I'd like to put to you a circumstance where your statement:

"Because in every attempt to explain the functional role of feeling, feeling turns out to be functionally superfluous (except if telekinetic dualism is true, and feelings have causal ... power -- but it isn't, and they don't)."

... might be in need of rework. Feelings (P-conscious fields, in general) can be seen as causally effective in 'knowledge change'. I like to be very specific and will choose the circumstance of knowledge change to be that of the scientist who is exposed to radical novelty (there is no existing knowledge of that which is being presented to the scientist by their P-consciousness, eg a mammal with 17 legs). That is, if knowledge can be characterised as:

KNOWLEDGE(t) (1)

then the dynamic term associated with a move from ignorance to apprisal can be presented as:

dKNOWLEDGE(t)/dt     (2)

Driven by the P-consciousness which is the scientific observation of a novel 17-legged mammal, equation (2) drives equation (1) to 'know' the new species: DIX-SEPT-UPED. So to speak.  In the very specific case of change in scientific knowledge, the change must be literally causally driven by the P-conscious field, because the state of the scientist's brain consistent with the dynamic revealing of the new knowledge cannot be reached (I mean this in a literal electrochemical/electrodynamical sense) unless the P-consciousness field drove it there. The effect of the phenomenal field P(t) can be detected (tested for) because of equation (2). It operates in an incredibly huge-dimensional state space where the change in possible state trajectories is driven by P-consciousness (states previously unreachable). Physically it's brain electrodynamics, but that doesn't matter at this level.
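
(A toy sketch, mine and purely illustrative -- nothing like the full electrodynamic formulation -- of the bookkeeping behind equations (1) and (2): the increment to KNOWLEDGE(t) is non-zero only when the observation presents radical novelty. The starting "knowledge" set and the Python form are just assumptions for the sketch.)

    # Toy sketch only: knowledge changes just when a (felt) observation is novel.
    knowledge = {"dog", "cat", "horse"}              # KNOWLEDGE(t)

    def observe(percept, knowledge):
        """Return the increment dKNOWLEDGE produced by one observation."""
        if percept in knowledge:
            return set()                             # nothing novel: dKNOWLEDGE/dt = 0
        return {percept}                             # radical novelty: dKNOWLEDGE/dt > 0

    for percept in ["cat", "dix-sept-uped", "dog"]:
        delta = observe(percept, knowledge)
        knowledge |= delta
        print(percept, "->", delta or "no change")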

I confess a degree of frustration at the lack of attention paid to formalising knowledge dynamics (as brain electrodynamics) and the role of P-consciousness in it - especially when we have a perfectly viable way to test for scientific knowledge change. In the context of scientific behaviour, the detection of change in scientific knowledge supplies a prima facie case for having objectively detected P-consciousness in a human scientist (or a robot equivalent). Human science, at least, operates this way. A robot capable of the same behaviour delivers a viable claim that the robot has P-consciousness or its equivalent, in a way which is independent of any internal architectural details. I have constructed a prototypical empirical test framework along these lines here:

Hales, C. 2009. An empirical framework for objective testing for P-consciousness in an artificial agent. The Open Artificial Intelligence Journal 3:1-15.
http://www.bentham.org/open/toaij/openaccess2.htm

I'd like to commend this line of thinking generally as bearing low-hanging fruit. Equations (1) and (2) can be constructed literally into a (very, very huge) set of quantum electrodynamics equations - so the formulation has a mathematical basis (as a massive 'projection operator' / 'observable' which reveals the state trajectory that is traversed in the act of revealing the new knowledge). Also note that the above "knowledge" dynamics can be generalised to all belief using dynamic doxastics thus:

dBELIEF(t)/dt     (2)a

It's just that scientific belief is objectively testable (a test for acquisition of a 'law of nature'), so (2)a is not empirically useful, because all it predicts is a holder of a belief. NOTE: philosophical 'isms are beliefs of kind (2)a. All you need to do is act scientifically. I commend the knowledge dynamics idea to you. Note there is also a possible formulation for the 2nd derivative:

d^2KNOWLEDGE(t)/dt^2     (3)

I leave you to ponder what (3) might be an indicator of... I have my own ideas! Q. What kind of cognitive agency results from (3) = 0?

Thus I find I cannot dismiss the causal efficacy of P-consciousness as easily as you do (not without denying my P-consciousness, and yours, and then demanding it be used for all scientific observation on pain of scientific suicide!). The self-referential application (an act of doing science directed at how we do science) seems to be the key to it. It changes the explanatory domain to one of "what perspective must I adopt in order that what exists (which is described through the P-consciousness of scientists, and is causally efficacious) might be seen to have a 1st person perspective". Which has been rather helpful ... for me, anyway.

regards,

Colin Hales


2009-05-12
The 'Explanatory Gap'
Reply to Colin Hales

Mind the Mind-Fields


(1) There is no coherent, contentful difference between "A-consciousness" and "P-consciousness"  (that's why I insist on just talking about feeling).

(2) If a scientist (or anyone) learns something new (either by observation or because he's told) then all that's happened is that his brain has new data (either sensorimotor or linguistic), and hence new ability to act accordingly (whether behaviorally or verbally).

(3) The problem -- a.k.a. the feeling/function problem or the mind/body problem -- is explaining how and why the gaining or the having of this new knowledge and ability is felt (rather than just "functed," as it would almost certainly be in an "artificial agent," unless it was Turing-Test scale). 

(4) I think you are deceiving yourself with your "phenomenal field P(t)": To formalize a mystery is not to solve it.

(5) The only fields there are are the garden-variety electromagnetic, gravitational etc. fields resulting from the four fundamental forces of physics.

(6) There are no extra "mind fields."


-- SH

2009-05-12
The 'Explanatory Gap'
Reply to Stevan Harnad
Stevan, part of our exchange was:

DA:
"there is no attempt to distinguish between human consciousness and any kind of animal 'consciousness'."

SH:  No need to distinguish: The feeling/function problem is about the fact that we feel (something), not about what we feel -- whether this or that. 

My reply:  But precisely. I am talking simply about being conscious (or 'feeling' if you like). Nothing in what I said alluded to what might happen to be the object of consciousness.  My point is that there seems to be an assumption (eg in Chalmers - and I gather you agree) that there is no important difference between being conscious as a human and being "conscious" (can we even use the same word?) as an animal. What on earth could justify this huge assumption?

Also, I'm afraid you're missing the point of my criticism of the Nagel 'insight'. I wrote:

"DA: "there is surely nothing it is 'like' to be conscious other than being conscious - which tell us absolutely nothing."

You replied:
SH: First, to expose the redundancy and root out the equivocation, it's "there is surely nothing it feels 'like' to feel other than to feel." Yup: And your point is...?"

Your change of vocabulary doesn't make any material difference so I will leave that aside. My 'point' is simply, as earlier explained, that to say that something is like itself (which is what this effectively amounts to) is mere verbiage. 

I am frankly amazed.  Can this Nagel 'insight' really be the basis of analytic philosophy's definition of consciousness? I am reluctantly led to think so because I notice that it keeps getting referred to as if it were holy writ...

DA


2009-05-12
The 'Explanatory Gap'
Reply to Derek Allan

WHEREOF ONE CANNOT SPEAK...

DA: "Nothing in what I said alluded to what might happen to be [THE OBJECT OF CONSCIOUSNESS].  My point is that there seems to be an assumption... that there is no important difference between [BEING CONSCIOUS AS] a human and [BEING "CONSCIOUS"] (can we even use the same word?) [AS] an animal. What on earth could justify this huge assumption? Your change of vocabulary doesn't make any material difference so I will leave that aside." 
Here is the transcription into the vocabulary that you think makes no material difference:

"Nothing in what I said alluded to what might happen to be WHAT IS BEING FELT.  My point is that there seems to be an assumption... that there is no important difference between FEELING WHAT a human FEELS and "FEELING" (can we even use the same word?) WHAT an animal FEELS. What on earth could justify this huge assumption? Your change of vocabulary doesn't make any material difference so I will leave that aside." 

As this transcription should illustrate, the change of vocabulary makes it clear that you are talking about differences in what humans and animals may be feeling, whereas what is at issue is whether they are feeling (anything at all).
DA: "to say that something is like itself (which is what this effectively amounts to) is mere verbiage."
No. Reminding ourselves that we all (including animals) feel, and that stones, (today's) robots -- and just about everything other than people and animals -- do not feel is not mere verbiage. It is perfectly comprehensible and perfectly true (except if one is determined to play the verbal game of Achilles and the Tortoise [or one is unable to do otherwise], in which case further verbiage will indeed make no material difference).


-- SH





2009-05-12
The 'Explanatory Gap'
Reply to Stevan Harnad
Hi Stevan

Re your comment: "As this transcription should illustrate, the change of vocabulary makes it clear that you are talking about differences in what humans and animals may be feeling, whereas what is at issue is whether they are feeling (anything at all)."

Not sure I'm happy with you changing everything to 'feels' etc. We are, after all, talking about consciousness and that's the term that the mainstream of this debate seems to use. In addition you have inserted the 'what's' in the second line which subtly alter the sense of what I wrote. The object of consciousness (or 'feeling') is quite irrelevant to the point I'm making. I am simply saying that one cannot assume that human and animal consciousnesses are the same (an elementary point, surely?). Issues about 'objects' or 'what's' have nothing to do with it.

You also write: "No. Reminding ourselves that we all (including animals) feel, and that, stones, (today's) robots -- and just about everything other than people and animals -- do not feel is not mere verbiage."

But I'm not making a point about reminding anyone of anything. I'm simply suggesting that comparing something to itself (as in the Nagel 'insight') is not likely to prove a very informative step.

DA


2009-05-12
The 'Explanatory Gap'
Reply to Derek Allan

EXTRACTING CATEGORY INVARIANCE FROM POSITIVE AND NEGATIVE INSTANCES


DA: "Not sure I'm happy with you changing everything to 'feels' etc. We are, after all, talking about consciousness and that's the term that the mainstream of this debate seems to use." 
And your point is...?
DA: "one cannot assume that human and animal CONSCIOUSNESS are the same... Issues about 'OBJECTS' or 'WHATS' have nothing to do with it."
Transcription: "one cannot assume humans and animals FEEL the same... Issues about WHAT THEY FEEL or WHAT IT FEELS LIKE have nothing to do with it."

The problem is not the sameness or differences in what they feel; the problem is the fact that they (both) feel anything at all.
DA: "I'm simply suggesting that comparing something to itself (as in the Nagel 'insight') is not likely to prove a very informative step." 
No one is comparing something  to itself. 

We all feel (and we all feel different things during every instance we are awake and compos mentis). Just as we can see daisies, lilacs, chrysanthemums, etc. and notice that they are all instances of seeing flowers, we can feel toothaches, and see red, and smell smoke, and notice that there is something (different) that each feels like, but that they all feel like something or other.

There is, however, a profound and important difference between all of our other categories (such as flower, or red) and the special category "feeling," namely, that with categories like red we can sample both positive and negative instances. We can sample instances of both red and non-red things, thereby allowing our brains to detect what the invariant features of the members of the category "red" are: the ones that reliably distinguish them from the non-members. 

In contrast, with feeling, we can only sample positive instances: everything we feel (toothache, what red looks like, what smoke smells like) is an instance of what it feels like to feel, but nothing is an instance of what it feels like to not-feel, because that is self-contradictory. (Note, again, that I don't mean what it feels like to feel sad rather than feel happy, i.e. what it feels like not to feel happy; I am talking about what it feels like not to feel at all.)

It is because of this positive-only instantiation of feeling that the category "feeling" is anomalous. Unlike all other categories, in which we have sampled not just their membership, but also the membership of their complement (i.e., their non-membership), "feeling" (and a few other uncomplemented categories) create certain persistent conceptual problems for us.

But that does not mean that uncomplemented categories are empty. Nor that instantiating them amounts to "comparing something to itself": The positive instances of feeling something (toothache, red, smoke) are all different from one another; so we do have some idea of what is invariant under all that variation. But not as decisive an idea as we have with normal, complemented categories, because there we get to sample the variations and transformations not only among the positive instances, but also the critical transitions to the negative instances, the ones that do not preserve the category invariance. With feeling we cannot do that. In that sense, uncomplemented categories are conceptually incomplete.
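
(For concreteness, a minimal toy sketch -- in Python, purely illustrative, with made-up instance features -- of the learning-theoretic point: with positive instances alone you can still find what the sampled members share, but with an empty complement nothing can confirm which of those shared features actually distinguishes members from non-members.)

    # Toy sketch: extracting category invariants with and without negative instances.
    positives = [                                  # everything felt is a positive instance
        {"felt": True, "what": "toothache"},
        {"felt": True, "what": "red"},
        {"felt": True, "what": "smoke-smell"},
    ]
    negatives = []                                 # what it feels like to not-feel: unsampled, in principle

    def shared_features(instances):
        """Features common to every sampled instance."""
        common = set(instances[0].items())
        for inst in instances[1:]:
            common &= set(inst.items())
        return common

    candidates = shared_features(positives)        # {('felt', True)}
    # A candidate is confirmed as discriminating only if it is absent from every
    # sampled non-member; with an empty complement, nothing is ever confirmed.
    confirmed = {f for f in candidates
                 if negatives and all(f not in set(n.items()) for n in negatives)}
    print("shared by positives:", candidates)
    print("confirmed invariants:", confirmed)      # set(): underdetermined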


-- SH






2009-05-13
The 'Explanatory Gap'
Reply to Stevan Harnad
SH: "No one is comparing something  to itself."

In relying on the Nagel dictum that "There is something that it is like to be a conscious organism", one is in effect comparing something to itself, as I explained earlier, because there is surely nothing it is 'like' to be conscious other than being conscious.

DA.  

2009-05-13
The 'Explanatory Gap'
Reply to Stevan Harnad

1) ... I understand the intent of this collapse of the A-/P- distinction. Just replace all instances of "P-consciousness" with "feeling" and there we are... on the same page.
2) ... You are talking to someone up to their armpits in the standard particle model and U(1) quantum electrodynamics. I was using the term 'fields' ambiguously... In cog sci and psychophysics, "visual", "auditory" or, in general, perceptual fields are the common parlance. Sorry about the confusion.

The 'dynamics' posting was about a causal role of "feeling" in brain adaptation (learning) dynamics, specifically in the brain of a scientist undergoing change in "knowledge", where you can objectively relate the result with "feeling" in an empirically cogent way, no less supported, as a claim, than any other science claim. Denying it puts all scientists in a logically invalid place, and no amount of abstract speculation and philosophical muddlement (BTW all mediated by "feeling"!) can make it go away. 

As a scientist I cannot deceive myself into acting as if the causality of knowledge change in scientists does not directly use the "feeling" that literally is scientific observation to constrain knowledge change. Absolutely every scientific act bar none is evidence in support of a claim about causal efficacy of the kind I make. Absolutely zero evidence exists for a valid original authentic empirical science done without “feeling” (=scientific observation) supporting it.  To deny this claim is to construct, using the same causal mechanism of “feeling”, a claim (a change in knowledge of the denier) to the contrary, thus implicitly invalidating the denial and placing the denier  in a logically inconsistent place.

All I can do is bring this approach to your attention and advise that it acts against claims of the causal inertness of “feeling” in ways that force a denier to become logically inconsistent in an empirically testable way. 

None of this has anything to do with explaining the origins of any “feeling” or the particulars of any specific kind of “feeling”. It has to do with scientists waking up to our own special role-relationship with “feeling” and what it tells us about ourselves. It tells us that “feeling is causally inert” is merely a mantra which is very suspect.

This process is rather odd... Here I am, in effect, claiming that "feeling" is literally the brain's solution to the (your!) symbol grounding problem. The act of "grounding" is an act of causal constraint on knowledge change consistent with the "feeling" involved in the representation of the external natural world in a scientist. It's an indirect (2nd order) causal link, but it's real and testable.

Guess I'll leave it there for your ponderings. 

cheers
colin hales


2009-05-14
The 'Explanatory Gap'
Reply to Derek Allan
DA: In relying on the Nagel dictum that "There is something that it is like to be a conscious organism", one is in effect comparing something to itself, as I explained earlier, because there is surely nothing it is 'like' to be conscious other than being conscious.
Derek, have you ever been asked "What's it like to be a philosopher?" If not, imagine you have. Now, that could be interpreted literally, as "to what experience is the experience of being a philosopher similar?" But I'd suggest that "will you please put into words for me the experience of being a philosopher?" is just as valid. An actual comparison is not really required, though it might well be useful.

Nagel's formulation is similar. It's not actually comparing consciousness to itself; it's just another way of saying that a conscious entity experiences, or feels, as Stevan would say, while one that's not conscious does not: it emphasizes the essentially subjective character of consciousness. Just as there are particular seemings to uncontroversially conscious things like us, so there is a large and perhaps potentially infinite set of seemings to being any conscious thing: that's what it means to be conscious. Perhaps there is an implied comparison there, between those we know to be conscious, ourselves, and all other conscious things, saying that in that one sense we are all the same, but I think that's a valid point to make. It certainly connects with your complaint about confounding the consciousness of humans and of other species.

2009-05-14
The 'Explanatory Gap'
Reply to Jamie Wallace

"To feel or not to feel, that is the question;
Whether 'tis nobler in the mind to suffer
The slings and arrows of outrageous fortune,
Or to take arms against a sea of troubles,
And by opposing, end them. To die, to sleep;
No more; and by a sleep to say we end
The heart-ache and the thousand natural shocks
That flesh is heir to — 'tis a consummation
Devoutly to be wish'd. To die, to sleep;
To sleep, perchance to dream. Ay, there's the rub,
For in that sleep of death what dreams may come,
When we have shuffled off this mortal coil,
Must give us pause. There's the respect
That makes calamity of so long life,
For who would feel the whips and scorns of time,
Th'oppressor's wrong, the proud man's contumely,
The pangs of despised love, the law's delay,
The insolence of office, and the spurns
That patient merit of th'unworthy takes,
When he himself might his quietus (nonfeel) make
With a bare bodkin? who would fardels feel,
To grunt and sweat under a weary life,
But that the dread of something after death,
The undiscovered country from whose bourn
No traveller returns, puzzles the will,
And makes us rather feel those ills we have
Than fly to others that we know not of?
Thus conscience-nous does make cowards of us all,
And thus the native hue of resolution
Is sicklied o'er with the pale cast of cogito,
And enterprises of great pitch and moment
With this regard their currents turn awry,
And lose the name of action."


2009-05-14
The 'Explanatory Gap'
Reply to Colin Hales

HOW/WHY IS OBSERVATION FELT OBSERVATION, AND KNOWLEDGE FELT KNOWLEDGE? (NO QUANTUM-COLLAPSE REPLIES, PLEASE!)


CH: "[I (CH) am immersed] in... quantum electrodynamics..." 

I became a little apprehensive when I read this, Colin, because I was afraid you were going to invoke the alleged causal role of "consciousness" (human [felt] observation) in the collapse of the quantum wave packet. (That would have been a non-starter, for one cannot solve the unsolved puzzles of one field with the unsolved puzzles of another field! But fortunately, I think, you are not taking quite that route here -- though you are coming close!)

CH: "The 'dynamics' posting was about a causal role of 'feeling' in brain adaptation (learning) dynamics, specifically in the brain of a scientist undergoing change in "knowledge", where you can objectively relate the result with 'feeling'... [T]he causality of knowledge change in scientists... use[s] the 'feeling' that... is [inherent in] scientific observation to constrain knowledge change..." 

There is no doubt that science is based on observations. There is no doubt that observations are felt. There is also no doubt that knowing is felt. But the question was: "How/why are observations (or anything else) felt? What is the causal role of the feeling?" 

(You have not answered that question; you have simply noted the fact that needs to be explained: that observations -- which play a crucial causal role in science -- also happen to be felt observations. Well, yes. And so too are observations that play a crucial causal role in everyday survival and reproduction. But how/why are any of them felt observations rather than just functed observations?

A meter-reading, after all, is a meter-reading (even if it seems to be mysteriously insufficient to collapse a wave-packet unless the meter is read by a feeling observer!). Observations are simply data in computational or dynamic (robotic) processes. Why do the data need to be "felt"?)

[I wonder, by the way, why you keep putting "feeling" in scare-quotes: They're real enough, you know! I can safely say "I feel hot." No need for me to say "I 'feel' hot"...]

CH: "[No] empirical science [is] done without “feeling” (=scientific observation) supporting it... [and] abstract speculation and philosophical muddlement [are] BTW all mediated by "feeling"!..."

All true. Feelings are a fact. The correlations are a fact. But now we are waiting for a causal explanation: what causal role does the fact that observations are felt rather than just functed play? (Ditto for knowing.) ("Mediating" is just renaming the mystery: mediating how, why?)

CH: "[The claim of a causal role for feeling in scientific observation and knowledge-change is] empirically cogent [and] no less supported... than any other science claim..." 

So far, the "claim" is only about a correlation between feelings and observations (measurements, data). We have yet to hear what causal (rather than mere -- and mysterious -- correlative) role they play.

CH: "To deny this claim [of a causal role for feeling in observation-based knowledge-change] is to construct, using the same causal mechanism of “feeling”, a claim (a change in knowledge of the denier) to the contrary... that force[s] a denier to become logically inconsistent in an empirically testable way..." 

It sounds like you may be imagining you have some sort of a Cartesian argument there, but I am afraid you do not. 

Feelings (though they are undeniably, cartesianly, there, being felt) have yet to reveal their causal role.  Neither correlating with functional causes, nor feeling as if they're causal, will do. (It matters not whether their causal role is discovered, somehow, via empirical observation and causal inference, in the usual scientific way, or their causal role somehow turns out to be a matter of logical necessity or cartesian certainty, via mathematics or the cogito. What's missing, still, is a coherent, viable hypothesis as to what their causal role is -- a hypothesis that cannot be immediately rejected by showing that it is either functionally superfluous on its own terms or draws on an extra telekinetic power that is contrary to all known evidence to date.)

CH: "This... is rather odd [for] I am... claiming that "feeling", is literally the brain's solution to the (your!) symbol grounding problem..." 

I hate to seem ungrateful, but the solution to the symbol grounding problem is sensorimotor grounding: The symbols in a Turing-scale robot -- a robot whose symbols are not only systematically interpretable as being about X (in the way the symbols in a book, computer or toy robot are) but a robot that also has the sensorimotor capacity to interact (behaviorally and verbally) with whatever the symbols are systematically interpretable as denoting, and to discourse about whatever the symbols are systematically interpretable as denoting, Turing-indistinguishably from the way we do -- are grounded. Their semantic interpretability (derived intentionality) is congruent with the robot's interactions with what its symbols are about.

But grounding is not meaning! And, a fortiori, it is not felt meaning, or feeling. So Turing-scale robotic grounding is enough to solve the (easy) symbol-grounding problem, but not to solve the (hard) feeling/function (mind/body) problem.

(By the way, it is not at all evident why Turing-scale robots could not do empirical observation or causal explanation even if they don't feel [i.e., even if their observing is not felt observing]. Grounding sounds like all they need.)
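
[To make the grounded/ungrounded contrast concrete, here is a minimal, purely illustrative Python sketch. It is not a model proposed by anyone in this thread: the `GroundedSymbol` class, the toy `stripe_detector` and the little dictionary are invented for illustration only. An ungrounded symbol is defined solely by further symbols; a grounded symbol is additionally hooked to a detector over (simulated) sensor input.]

```python
# Hypothetical illustration only -- not any poster's actual model.
# Contrast: a symbol defined only by other symbols vs. a symbol "grounded"
# in a sensorimotor category detector. All names here are invented.

from typing import Callable, List

# Ungrounded: "zebra" is defined only in terms of further symbols,
# whose definitions bottom out in still more symbols, never in the world.
dictionary = {
    "zebra": ["horse", "striped"],
    "horse": ["animal", "four-legged"],
}

class GroundedSymbol:
    """A symbol token connected to a detector over (simulated) sensor input."""
    def __init__(self, name: str, detector: Callable[[List[float]], bool]):
        self.name = name
        self.detector = detector  # picks out instances of the category in input

    def applies_to(self, sensor_input: List[float]) -> bool:
        # The symbol is "about" whatever its detector picks out.
        return self.detector(sensor_input)

def stripe_detector(pixels: List[float]) -> bool:
    # Toy rule: call the input "striped" if adjacent intensities alternate sharply.
    flips = sum(1 for a, b in zip(pixels, pixels[1:]) if abs(a - b) > 0.5)
    return flips >= len(pixels) // 2

striped = GroundedSymbol("striped", stripe_detector)
print(striped.applies_to([0.0, 1.0, 0.0, 1.0, 0.0, 1.0]))    # True
print(striped.applies_to([0.45, 0.5, 0.48, 0.5, 0.47, 0.5]))  # False
```

[The point of the sketch is only that grounding, in this sense, is a causal/functional link between a symbol token and what it picks out; nothing in the sketch feels anything, which is exactly why grounding alone does not close the gap.]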

CH: "The act of "grounding" is an act of causal constraint on knowledge change consistent with the "feeling" involved in the representation of the external natural world in a scientist. It's an indirect (2nd order)  causal link, but it's real and testable..."

Sensorimotor grounding is certainly a causal constraint on a symbol system, and if it is Turing-scale grounding it is probably as much as cognitive science (including cognitive neuroscience) can tell us about cognition.

But, alas, it still leaves a gaping explanatory gap.

("Consistent with the feeling" is not the same as "caused by the feeling," any more than "correlated with the feeling" is. And "representations" per se are no help; moreover, if they are felt representations, then they are part of the problem, not the solution: How and why are they felt representations, rather than just functed representations? And I have no idea at all what an "indirect" or "2nd order" causal link means...)

-- SH




2009-05-14
The 'Explanatory Gap'
Reply to Stevan Harnad

Wow. I post a brief aside and I am sucked into the explanatory gap! The epistemic vortex from hell. OK. I’ll try to align the dialogue with the explanatory gap (EG) theme and to avoid talking past each other from our different vantage points. I’ll use the word feeling as the most general expression directed specifically at the ‘what it is like’ subjective qualities.


Scientists.

When discussing the EG I like to be very specific and refer everything to the scientist and scientific behaviour as a very useful, unique behaviour with which to specifically calibrate the discussion. The scientist is charged with acquisition of objectively testable ‘laws of nature’ (LON). Empirical corroboration of a LON’s predictions puts a scientist in a state of feeling that is scientific observation (e.g. occipital lobe/visual-‘field’). Contrary to what might have been the impression in my earlier posts: All LON are devoid of causality (this includes QM). LON in the form of the traditional currency of science are (statistical) descriptions, not explanations. The LON are descriptions (predictive) of how the natural world/scientist combined system feels to the scientist (private, subjectively presented) in the act of scientific observation. Or, in the parlance of the 19th century, LON merely ‘organise appearances’. Viable empirical science results when a LON describes (symbolically captures) the outward (observed) signs of a critical dependency through infallibly concomitant appearances. It is the term ‘critical dependency’ which is key.

 

OK.

 

LON, Standard Particle Model view of brain material

There is nothing to a brain but (a) nucleons and (b) electrons and (c) space. This (effectively) exhausts the members of the standard particle model involved (do I have to elaborate the nucleons? Nope). There is mass and charge associated with each. They express EM fields in space as a result of the charge carried. Those fields propagate and vortex and have angular momentum (are/can emit radiation). The two short-acting forces and the gravitational force contribute nothing to the origins of the layered organisation (atom, molecule, cell, syncytia) that is actively involved in the function that gives rise to cognition. Stabilisation of the structure results from a multitude of constraints on the motion of these particles in space (chemical bonds etc.).

 

Now the meat:

1) ALL of the descriptions of particles and fields and forces were constructed by scientists inside the described system, made of it, using ‘feeling’. The LON that is the standard particle model, QM, electrodynamics etc. captures the critical dependencies involved in the organisation of a scientist’s brain.

 

2) All these specialised LON are constructed presupposing the existence of the scientist and the ability (feeling) that is scientific observation. The scientist is implicitly built into the LON.

 

I now confine comments to the scientist’s brain. Here’s the EG laid bare:

 

3) NONE of the above LON predict the existence of the feeling that is scientific observation. Let’s call these applicable (condensed matter) particles and forces and fields LON_X.  NONE of LON_X predict a scientist or observing/feeling. All presuppose both or exist in a context of the assumed existence of both. Thus the standard LON have –NIL- content in respect to the existence of 1st person ‘feeling’ in its context of scientific observation. I think you well and truly 'get' that!

 

Which leads me to the final point in the form of a question designed to point to the place where the answer to the explanatory gap actually is.

 

“Q1. What kind of universe must we inhabit in which LON_X describes what a brain looks like in the act of delivering feeling to a scientist?”

 

Answer?   NOT the universe indicated by LON_X.

 

Or, put another way:

 

Describing a universe in which an embedded observer exists who will describe that universe LON_X-ly is not the description LON_X. This unavoidable logic tells us that we have not even begun to describe the universe in the fashion needed to predict a scientific observer of the kind we are, who sees the observation mechanism behaving LON_X-ly.

 

Or, put yet another way:

 

The universe is NOT made of atoms or molecules or cells or subatomic particles. These are the things we perceive it to be made of when we look (feel it) as scientists.

 

Or, put yet another way:

 

The universe is made of organised ‘something’, say Planck_Scale_Thing, which exists PRIOR to an observer, but delivers an observer that reveals the universe to behave LON_X-ly because the observer is inside the universe, made of Planck_Scale_Thing.

 

Or, put yet another rather more pointed way:

 

LON_X in brain material essentially describes electromagnetic fields in space. Q … What perspective must I adopt on the universe such that electromagnetism behaving in certain specific ways (like a brain) makes it acquire a 1st person perspective (from the point of view of BEING the electromagnetic fields that ARE the brain), when elsewhere in the body (such as in the peripheral nerves) it fails to do that?

 

 

This rather awkward non-explanation of ‘feeling’ is as far as I need go for now. What the above tells me is that I can blather on forever about LON_X and I will NEVER leap the explanatory gap. It is a priori meaningless and any expectation that it can is misguided. This does not mean the gap cannot be leapt. It means we haven’t leapt it yet.

 

To leap the explanatory gap is to construct descriptions of  organised Planck_Scale_Thing in such a way as to show how an observer might function. I know I have the right Planck_Scale_Thing when my descriptions start to produce observations consistent with LON_X, such that it reveals itself as the brain material of the (scientific) observer.

 

Because

(1)   no amount of LON_X discussion will ever leap the explanatory gap

(2)   inventing workarounds like “emergence” and “function” and “organisation” and “representation” and “mind-stuff” and “computation” and “complexity” blah blah blah... is nonsensical magical thinking: there is literally no such “Thing”. There are nucleons and electrons and space and EM fields. That’s it. The fact that they are arranged in specific ways and behave dynamically in specific ways adds nothing tangible. If you count the mass, the charge, the numbers of particles and the angular momentum (energies) of the EM field dynamics – that’s all there is (yes it’s lossy! – it propagates energy all over the place... a detail).

(3)   Believing that the universe is literally made of LON_X is as bad as (2). It’s made of SOMETHING, though, and that SOMETHING cannot be mathematical abstractions of the LON_X type, because the explanatory gap exists.

 

This is as far as I can go. I have been exploring the potential ‘other sides’ of the explanatory gap for a while now. But because the kinds of description involved are totally foreign, to go into it will just look like a whole lot of mumbo-jumbo no more convincing than (2) or (3). I merely bring a perspective on the explanatory gap and a way over it. We’ve been stuck on one side of the gap for 2000 years – since Aristotle – because we keep believing extra things about our laws of nature, when indeed the truly explanatory laws of nature are not even in the LON we know and we haven't even started to compile that set (yet), nor are we explicitly (mainstream) aware of the status of our LON as 100% a-causal descriptions presupposing scientists, with the scientist built into them.
 

That’s my take on the explanatory gap.
 I’ll leave the causality/grounding issue aside for now. WHEW!

I’m away for a few days out of reach of mail. There’s not much I can add to the above.
Carry on!

Regards,

 

Colin Hales


2009-05-14
The 'Explanatory Gap'
Hi Robin

Thanks for your reply.   You say: "It [Nagel's 'insight'] is not actually comparing consciousness to itself, it's just another way of saying that a conscious entity experiences, or feels, as Stevan would say, while one that's not conscious does not:"

I don't think it advances us one jot to explain consciousness in terms of words like "experiences" or "feels". Those words are far too closely bound up with the idea of consciousness itself to provide any philosophical leverage on the idea. Indeed, they can even be used interchangeably with it. ("I feel a pain in my foot". "I am experiencing a pain in my foot".  "I am conscious of a pain in my foot".)  Can we even begin to imagine what (human) feeling or experience would be like minus consciousness?

I'm not sure I follow the rest of your post. I'm afraid I always get edgy when I encounter the word "subjective". Unless very clearly defined, it's far too vague for my liking. And I just don't know what 'seemings' are.

Quite honestly, the more I think about Nagel's supposed 'insight', the more astonished I am that it has gained the standing it seems to have in analytic philosophy's discussions of consciousness (and I notice that even leading names like Chalmers cite it as if it said something important). To my mind, it tells us nothing useful at all about the question. This 'insight' is, quite simply, philosophically vacuous.

DA


2009-05-14
The 'Explanatory Gap'
Reply to Derek Allan
Derek, you say "[Nagel's] 'insight' is, quite simply, philosophically vacuous."

I'm afraid that reaction seems to me philosophically naive. Philosophy is not a hard science; it leans heavily on subtle and complex semantics, so what is useful and meaningful to some will often be useless and meaningless to others. To say that something means nothing to you is perfectly valid, but you go further.

As I've suggested before, you apparently have some kind of grudge against analytic philosophy, and as a philosopher yourself, I feel that your approach throughout this discussion has been quite unprofessional, using a lack of appreciation of another branch of your subject as a blunt object with which to attack it. But those who do appreciate a thing are never impressed by the fulminations of those who do not. Even if, for some of us, Nagel's insight does, indeed, say more about analytic philosophy at a particular stage of its development than about consciousness, that's no justification for such blatant disrespect between fellow professionals. I believe you're probably wasting your time here, though I can't be sure, because I don't know what you're hoping to achieve.

2009-05-14
The 'Explanatory Gap'
Reply to Stevan Harnad
AT:  "Why, exactly, do you believe that the brain states that constitute our feelings can't ever be explained?"
SH: "Because in every attempt to explain the functional role of feeling, feeling turns out to be functionally superfluous (except iftelekinetic dualism is true, and feelings have causal power -- but it isn't, and they don't).  ..... I hereby make the same challenge for "explanations" of the functional or causal role of feeling: Tell me what it is, and I will show it is functionally superfluous on its own terms." 


Stevan, I have the feeling that the very way in which you propose the notion of a feeling-function divide implicitly precludes any possibility of a causal role for feeling. Because of this feeling on my part, I am writing this response to you. Would you claim that this feeling on my part plays no causal role in my typing the post that you are now reading?


.. AT 

2009-05-14
The 'Explanatory Gap'
Reply to Colin Hales

GAP INTACT UNTIL FURTHER NOTICE...

CH: "Wow. I post a brief aside and I am sucked into the explanatory gap!" 

Well, "The Explanatory Gapis the theme of this thread...

CH: "Empirical corroboration of...  predictions  [from Laws of Nature (LON)] puts a scientist in a state of feeling that is scientific observation..." 

So does empirical falsification of predictions from LON. So does just about everything else we say and do whilst awake and compos mentis...

CH: "LON... are (statistical) descriptions... (predictive) of how the natural world/scientist combined system feels to the scientist... in the act of scientific observation..." 

Translation: "Making a 'scientific observation' and making and understanding a scientific explanation feel like something, and those feelings are tightly correlated with the data of the observation and the explanation."

But we already knew that. We are now talking about explaining how and why making an observation, and making and understanding an explanation -- and just about everything else we do whilst alive, awake, and compos mentis --  feels like something and correlates tightly with what is going on in the world.

You are not touching the question of how and why at all. You are just reformulating what you take to be the nature of scientific observation and scientific explanation (and presupposing feeling as somehow part of the package). In other words, you are, I'm afraid, begging the question (underlying this topic thread, which is about the explanatory gap), completely.

CH: "There is nothing to a brain but (a) nucleons and (b) electrons and (c) space..." 

Fine. Now how and why do they sometimes generate feeling? 

CH: "Now the meat:... ALL of the descriptions of particles and fields and forces [were] constructed by scientists inside the described system, made of it, using ‘feeling’..." 

"Using" feeling, or whilst feeling? This is where you beg the question, by presupposing (without explanation) that feeling is causal, rather than just correlated with brain processes that are causal (and mysteriously generate correlated feelings too).

(Keep it simple, Colin. Your complicated and somewhat idiosyncratic way of putting things is fooling you into thinking you are making inroads on the explanatory gap, when you are not.)

CH: "LON are constructed presupposing the existence of the scientist and the ability (feeling) that is scientific observation. The scientist is implicitly built into the LON..."

You said that already: Now, how/why are scientists' (and laymen's) observations and explanations felt rather than just brain-functed?

CH: "NONE of the above LON predict the existence of the feeling that is scientific observation...All presuppose both..."

Quite right. And that is the explanatory gap: Now let's hear how you propose to bridge it...

CH: "[W]e have not even begun to describe the universe in the fashion needed to predict a scientific observer of the kind we are, who sees the observation mechanism behaving [lawfully]...

Indeed; but your point is...?

CH: "The universe is NOT made of atoms or molecules or cells or subatomic particles. These are the things we perceive it to be made of when we look (feel it) as scientists..."

We feel when we do things; scientists do too. But we knew that. (I'm not sure whether you are also telling us that current scientific theory is wrong, and if so, why; but I am pretty sure you are not making any inroads on the explanatory gap: just re-describing it.)

Or perhaps you are alluding here to the fact that although feelings are correlated with the way things are in the world, they are nevertheless incommensurable with them (so it is erroneous to think of feelings as somehow "resembling" the things that correlate with the feelings: red with felt-red, round with felt-round, etc.). -- That's true too, but likewise does not help to bridge the explanatory gap; it's part of the gap.

CH: "What perspective must I adopt on the universe such that electromagnetism behaving in certain specific ways (like a brain) makes it acquire a 1st person perspective (from the point of view of BEING the electromagnetic fields that ARE the brain), when elsewhere in the body (such as in the peripheral nerves) it fails to do that?..."

Translation: "What is the explanation of how and why (some) brain function is felt, whereas (say) kidney function is not?"

That's the question, alright: But what's the answer? 

(The equivocation on "perspectives" won't help; it just milks the mystery. And the fact that you are focussing on scientific observations and scientific explanations about what there is in the world is not relevant; the same problem would be there if you were just focusing on a layman's "ouch.")

CH: "This rather awkward non-explanation of ‘feeling’ is as far as I need go for now. What the above tells me is that I can blather on forever about LON_X and I will NEVER leap the explanatory gap. It is a-priori meaningless and any expectation that it can is misguided. This does not mean the gap cannot be leapt. It means we haven’t leapt it yet."

OK, I'll wait till you've leapt it, or at least give a principled account of how it could be leapt...

CH: "To leap the explanatory gap is to construct descriptions... in such a way as to show how an observer might function. I know I have the right... descriptions [when they] start to produce observations consistent with [Laws of Nature] such that it reveals itself as the brain material of the (scientific) observer."

This unfortunately sounds as if it is going in circles, without substantive content, just a hope.

 "Consistent with" just means "correlated with" here, and the gap is about causation...

-- SH




2009-05-14
The 'Explanatory Gap'
Reply to Arnold Trehub
AT: "I have the feeling that the very way in which you propose the notion of a feeling-function divide implicitly precludes any possibility of a causal role for feeling."
Your feeling may well be right -- but please don't blame the messenger! It's the truth (or falsity) of the message that matters, not whether one feels it's true or false.
AT: Because of this feeling on my part, I am writing this response to you. Would you claim that this feeling on my part plays no causal role in my typing the post that you are now reading?
I am pretty sure that you feel that you posted this message because you felt like it, and not because you were impelled to by some unfelt force. I am not sure you are right about that, though. Are you? If so, please explain how and why... That way we'll be surer we're not just trading feelings...




-- SH

2009-05-15
The 'Explanatory Gap'
Hi Robin

The problem with your reply, as I see it, is that it criticizes me, not my argument.  I am here for philosophical exchanges not ad hominem stuff.

The post of mine you are responding to set out a specific criticism of Nagel's 'insight' (or your interpretation of it in this case). You have not addressed that criticism. You simply say that "what is useful and meaningful to some will often be useless and meaningless to others", which, as I'm sure you will agree, is not a strong philosophical argument.

I stand by what I said, for the concrete reasons I have given: the Nagel 'insight' is philosophically vacuous. If you (or anyone) can produce an argument to show why I am wrong - a possibility I don't rule out - I would be very happy to consider it. 

DA

2009-05-15
The 'Explanatory Gap'
Reply to Derek Allan

DETECTING CATEGORY INVARIANTS FROM POSITIVE INSTANCES ALONE

DA: "the Nagel 'insight'...that 'There is something that it is like to be a conscious organism'... is in effect comparing something to itself... [This] is philosophically vacuous. If you (or anyone) can produce an argument to show why I am wrong... I would be very happy to consider it."
Several such arguments have already been made, but here's another, spelled out: You know what a (ripe) tomato looks like; you know what a (red) apple looks like; you know what blood looks like; you know what the top of a traffic light looks like; you know what a cardinal (bird, or prelate in robes) looks like; you know what a Royal Canadian Mounted Policeman looks like. If you showed pictures of all those things to a child and asked what they all had in common, he would immediately say that they were all red. That would all be possible exclusively on the basis of positive instances of red things, by detecting the (obvious)  invariant property they all shared, even though they differed from one another in every other respect.

This sampling of diverse positive instances would not be  "comparing something to itself."

The same is true in the case of sampling instances of feeling this, and that, and that.

(However, as I have also kept stressing, the category of feeling is nevertheless abnormal and problematic, because negative instances are impossible, whereas negative instances of red (e.g., green things) are possible, and every child has sampled them too -- though you don't really need to sample them in order to notice what all the instances of red things I listed above have in common. It is true, however, that for more difficult (more "underdetermined") categories, those that are highly confusable with other, very similar-looking categories, it is necessary to sample negative instances too (i.e., members of the other categories), with error-corrective feedback; positive instances alone are not enough for detecting which are the invariant properties in such cases. The category "feeling," however, is not such a case. Even though it is a defective category, because it is uncomplemented and uncomplementable, it is not empty, and everyone (except perhaps Lewis Carroll's Tortoise) can easily detect the invariant underlying its many diverse instances to a good enough approximation from the positive instances alone.)
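
[The categorization point above can be made concrete with a small, purely illustrative Python sketch. The toy feature sets below are invented for this example and come from no model discussed in the thread: with diverse positive instances represented as feature sets, the property they all share can be read off directly; for confusable ("underdetermined") categories, negative instances are also needed to isolate the discriminating features.]

```python
# Toy illustration only (hypothetical feature sets, not a model from this thread):
# from diverse *positive* instances alone, the shared invariant can be read off.

positive_instances = {
    "ripe tomato":         {"red", "round", "edible"},
    "red apple":           {"red", "round", "edible", "has-stem"},
    "blood":               {"red", "liquid"},
    "traffic light (top)": {"red", "glows", "round"},
    "cardinal":            {"red", "feathered"},
    "RCMP officer":        {"red", "person", "uniformed"},
}

# The invariant is whatever every positive instance has in common.
invariant = set.intersection(*positive_instances.values())
print(invariant)  # {'red'}

# For harder, easily-confusable categories, positive instances alone
# underdetermine the invariant; negative instances (members of rival
# categories) plus error-corrective feedback are needed as well.
negative_instances = {
    "green apple": {"green", "round", "edible", "has-stem"},
    "grass":       {"green"},
}

# Features shared by all positives and absent from all negatives.
discriminating = {
    f for f in invariant
    if all(f not in feats for feats in negative_instances.values())
}
print(discriminating)  # {'red'}
```

[In the terms of this thread, "feeling" behaves like the positive-only case -- except that its complement class is not merely unsampled but unsampleable.]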


-- SH




2009-05-15
The 'Explanatory Gap'
Reply to Stevan Harnad
Thanks for your reply, Stevan. I don't have any great problem with your example, though I imagine the child would need to know the colour red beforehand. (Though even if s/he didn't, I imagine the response could at least be 'they are all the same colour'.)

What I do have a problem with, though, is knowing what your example has to do with the Nagel 'insight' which, if I have it right, is "There is something that it is like to be a conscious organism". What is the connection between that (rather gnomic) proposition and what you call "sampling of diverse positive instances"? And how precisely does your example refute my argument that Nagel is comparing something to itself?

DA

   

2009-05-15
The 'Explanatory Gap'
Reply to Stevan Harnad
AT: Because of this feeling on my part, I am writing this response to you. Would you claim that this feeling on my part plays no causal role in my typing the post that you are now reading?
SH: "I am pretty sure that you feel that you posted this message because you felt like it, and not because you were impelled to by some unfelt force. I am not sure you are right about that, though. Are you? If so, please explain how and why... That way we'll be surer we're not just trading feelings..."


I feel that I decided to post the message because I felt like it, but at the same time I feel that my subsequent posting was impelled and executed by unfelt neuronal mechanisms. I also feel that I might be wrong about all of this. You apparently feel that my feeling that I wanted to post the message played no causal role in the chain of brain events leading to the posting. May I assume, Stevan, that even though you feel that my feeling played no causal role in my posting, you also feel that your feeling about this might be wrong? 


How do we decide? I think/feel that the unfelt forces (biological mechanisms) that impelled and executed my posting had to be selected by my brain on the basis of my prior conscious/felt representation of the salient aspects of my personal world in this perceived forum. If I were unconscious (without feeling) I would be unable to post! I have shown how a biologically credible system of egocentric brain mechanisms might constitute the brain state that is the feeling causing the selection of the unfelt biological processes which execute the posting. Can you show the brain mechanisms that can do a similar selection without an egocentric representation of the salient world?


.. AT





2009-05-15
The 'Explanatory Gap'
Reply to Derek Allan
RF: Even if, for some of us, Nagel's insight does, indeed, say more about analytic philosophy at a particular stage of its development than about consciousness, that's no justification for such blatant disrespect between fellow professionals.

DA: The problem with your reply, as I see it, is that it criticizes me, not my argument.  I am here for philosophical exchanges not ad hominem stuff.
I don't particularly wish to spend more time defending either Nagel or my interpretation. Perhaps I'll consult Margaret Boden's wonderful history of cognitive science and philosophy, Mind As Machine, on the significance of Nagel's 1974 paper within these disciplines, because, although I remember feeling it was very important then, it now seems so obvious as to be almost, as you put it, philosophically vacuous. (Though I also feel that way about Wittgenstein's later concept of meaning, despite the fact that it's not yet universally accepted.) In my last post I was more concerned about your apparent attitude, but I can now see there's no point in pursuing that issue either so I'll leave it at that.

2009-05-15
The 'Explanatory Gap'
Reply to Derek Allan

KNOWING SOMETHING WHEN YOU FEEL IT

DA: "[No] problem with your example [of a child recognizing the category red from positive instances alone]... [But]... how precisely does [this] refute my argument that...Nagel['s] ''There is something that it is like to be a conscious organism"... is comparing something to itself?"
"Red" is a category; "feeling" is a category. What red looks (feels) like is a recognizable category; so what feeling feels like is likewise a recognizable category. We know it when we see (feel) it, and we know it on the basis of positive instances alone (which does not mean "comparing something to itself").

And that's all Nagel meant. That we all feel, that we all know what that is and what it means, and that we all know it when it is happening. 

(Of course, the only thing we feel is our own feelings, so those are the only feelings about which we have cartesian certainty, when they are actually being felt [sentio ergo sentitur], whereas about the feelings of other creatures we can only guess. I'd have to be the other creature -- say, Nagel's bat -- in order to know for sure that it [i.e., I] feels, and also to know what it feels, i.e., what that feeling feels like. [It might feel quite different from anything I am currently able to feel, being me.])

That, by the way, is all I want to exegesize and defend in Tom Nagel's viewpoint. The rest of the hermeneutics of "viewpoints" is not (in my view) all that relevant, insofar as the explanatory gap (on which Nagel is unaccountably an optimist!) is concerned. Viewpoint is just one of the many manifestations of consciousness and its countless synonyms and paranyms that one can single out and hermeneuticize without making any real inroads on the explanatory gap itself.

And that is yet another reason why I insist on sticking to straight talk about feeling rather than riding off in all directions with paranyms: A privileged "viewpoint" is already implicit in feeling, since the only one that can feel a feeling is the feeler. Anything else is just guesswork -- but guesswork "grounded" in your own feelings (if you feel at all). Otherwise [attention Colin Hales!] it is just "functing"... 

Here, to jog everyone's memory, is a partial list of these soothingly distracting euphemisms, with the invitation to add your own particular favorites (and then forget about them):
consciousness, awareness, qualia, subjective states, conscious states, mental states, phenomenal states, qualitative states, intentional states, intentionality, subjectivity, mentality, private states, 1st-person states, contentful states, reflexive states, representational states, sentient states, experiential states, reflexivity, self-awareness, self-consciousness, sentience, raw feels, experience, soul, spirit, mind..., viewpoint, ...

-- SH





2009-05-15
The 'Explanatory Gap'
Reply to Stevan Harnad
SH: "Red" is a category; "feeling" is a category. What red looks (feels) like is a recognizable category; so what feeling feels like is likewise a recognizable category. We know it when we see (feel) it, and we know it on the basis of positive instances alone (which does not mean "comparing something to itself").  And that's all Nagel meant. That we all feel, that we all know what that is and what it means, and that we all know it when it is happening.

Well, this now seems to be the third version of Nagel's revelation that I've had put to me. (The differing interpretations alone make me wonder: if it's so clear, why do I keep getting different versions of what it means...?)

But I'm afraid this version is no more convincing than the others, Stevan. I don't know what all the 'category' stuff is meant to establish. (Who decides what is a category anyway, and how is a 'category' defined? And how does all that have any bearing on our problem?) But the nub of your argument seems to boil down to "we all feel and we just know what that means, etc." But that just lands us right back where we were. 'Feels' in this context obviously means much the same as 'experiences' and 'be conscious of' (as I pointed out to Robin). It doesn't give us any leverage on the idea of consciousness at all, i.e. it's not an explanation, it's simply an approximate synonym.

If Nagel's revelation is nothing more than a claim that consciousness is feeling, then we truly are in the land of the philosophically vacuous!

DA.

2009-05-15
The 'Explanatory Gap'
Reply to Arnold Trehub

ON UNFELT EGOCENTRISM


AT: "May I assume, Stevan, that even though you feel that my feeling played no causal role in my posting, you also feel that your feeling about this might be wrong?" 
Sure. (I might be wrong about anything except the cogito and 2+2=4.) Telekinetic Dualism could be true. But I wouldn't count on it...
AT: If I were...(without feeling) I would be unable to post! 
I missed the part about how and why there cannot be posting without feeling: Please explain (it's the explanatory gap).

And whilst you're at it, please also explain how and why it is that your brain generates the feeling that you feel like posting (as well as generating the posting, for whatever reasons you posted it), rather than your brain just generating the posting (for whatever reasons you posted it)? 
AT: "I have shown how a biologically credible system of egocentric brain mechanisms might constitute the brain state that is the feeling causing the selection of the unfelt biological processes which execute the posting. Can you show the brain mechanisms that can do a similar selection without an egocentric representation of the salient world?"
You neglected to mention how and why the egocentric brain mechanism was felt rather than just functed...


-- SH

2009-05-15
The 'Explanatory Gap'
Reply to Derek Allan

DA:  "'Feels' in this context obviously means much the same as 'experiences' and 'be conscious of'....  It doesn't give us any leverage on the idea of consciousness at all i.e. it's not an explanation..."
Glad you got the point, at last. (The "hard" problem of consciousness is to explain how and why we feel. There is no such explanation. Unlike Tom Nagel, I also think this explanatory gap cannot be closed, and I've stated many times why: the incommensurability of feeling and function, despite the correlation; the functional superfluousness of feeling in a functional explanation of the brain's performance capacity; the exhaustiveness of the four fundamental forces, leaving no room or evidence for a fifth force; hence the falsity of telekinetic dualism.)

Now, what's your point, Derek? Is it just nonspecific animus against what you keep calling "analytic philosophy"? Or do you actually have a substantive point to make about the explanatory gap?

-- SH



2009-05-16
The 'Explanatory Gap'
Reply to Stevan Harnad
SH: What red looks (feels) like is a recognizable category; so what feeling feels like is likewise a recognizable category. We know it when we see (feel) it...
But do we? I don't believe that we feel feeling, or perceive perception, or are aware of being conscious. We think that we feel, but that's rather a different thing. Shouldn't we try to understand that first, then deal with what remains, if anything?

2009-05-16
The 'Explanatory Gap'
Reply to Stevan Harnad
Hi Steve

SH: Glad you got the point, at last

I've been making the point in my last post for quite some time now...

SH: "The "hard" problem of consciousness is to explain how and why we feel.

I would have thought the 'hard problem' of consciousness (is there an 'easy' problem, by the way?) is, above all, to explain what consciousness -  or (human) feeling - is. (Which the Nagel revelation doesn't even begin to do.) 

SH: "Do you actually have a substantive point to make about the explanatory gap". 

To reply to this I would need to repeat what I said in numerous earlier posts.  I invite you to read them, Steve.

In essence my point was that the phrase 'explanatory gap' is very arguably a misnomer. It implies, as I said earlier, that one is on the right track but just hasn't managed to make the final step. But it is eminently possible (and the Nagel debacle only reinforces my thinking in this regard) that one (i.e. analytic philosophy's approach to this question) is on the completely wrong track. In this case, explanatory 'gap' is a kind of subtle self-flattery ("We are really getting somewhere, but, gee, we haven't quite made it yet").

I hope all that is plain enough?

DA

2009-05-16
The 'Explanatory Gap'

ON WHAT IT FEELS LIKE TO BELIEVE, THINK, AND KNOW


RF: "I don't believe that we feel feeling... We think that we feel"
When I am feeling something (which is most of the time when I am awake), I don't think I feel, I know I feel, if I know anything at all! 

I think Descartes is with me on that one, despite his unfortunate choice of "cogito" for his cogito. (There is indeed something it feels like to think something; there's also something it feels like to think something is true, and even something it feels like to think you know something for sure. But -- again thanks to Descartes -- only in two cases are we actually justified in feeling that we know something for sure: one is the law of noncontradiction -- and everything that follows from anything else on pain of contradiction, hence necessity -- and the other is the fact that we are feeling, when we feel. That is a matter of certainty, if anything is.)


-- SH



2009-05-16
The 'Explanatory Gap'
Reply to Derek Allan
DA: "[I]s there an 'easy' problem, by the way?"
Sure, all of ordinary science, including all of cognitive science, including brain science. There's only one hard problem, and that's how and why we feel. (QM might have another hard problem, with its own duality puzzles, but I don't think it's as hard, or hard in the same way.)
DA: "I would have thought the 'hard problem'... is... to explain what... feeling -is."
No, I think we all have as good an idea of what feeling is as we are ever likely to get of what anything is: The hard problem is explaining how and why we feel. (But if you want to wrap the explanation of the causal origins and consequences of something into what you mean by explaining what it is, then, yes, that is the hard problem after all.)
DA: "[M]y point was that... one... is on the completely wrong track... I hope all that is plain enough?"
Only plain enough to reveal that you are unfortunately not making any substantive point at all...


-- SH

2009-05-16
The 'Explanatory Gap'
Reply to Stevan Harnad
SH: "we all have as good an idea of what feeling is as we are ever likely to get of what anything is"

I can write you pages on my idea of what a bird, or a planet, or a motor car is. But I am reduced to mere babblings when it comes to saying what consciousness is. ("Consciousness" is a more relevant word than "feeling" to this debate, but the same would apply there too.) And if Nagel's "insight" is the best analytic philosophy can come up with, then I suggest, for the reasons I have given, that it is reduced to mere babblings also.

A propos, is there anyone in the analytic field who is considered to have come up with a better formulation than Nagel's?  I feel we have squeezed that rather disappointing orange dry. Or - perish the thought - is Nagel the best we can hope for?

DA


2009-05-16
The 'Explanatory Gap'
Reply to Jamie Wallace
DA: "I would have thought the 'hard problem' of consciousness (is there an 'easy' problem, by the way?) is, above all, to explain what consciousness -  or (human) feeling - is. (Which the Nagel revelation doesn't even begin to do.)"  


I agree with Derek Allan that we have to explain what feeling is before we can explain what feeling might cause. I wonder if Stevan and Derek would agree that feeling is a particular state of the brain. If we can agree on this, then perhaps we can discuss what particular state of the brain might constitute feeling. From there we might make progress on the how and why of feeling.

.. AT 




2009-05-16
The 'Explanatory Gap'
Reply to Arnold Trehub
AT: "I wonder if Stevan and Derek would agree that feeling is a particular state of the brain. If we can agree on this, then perhaps we can discuss what particular state of the brain might constitute feeling."

I think that sounds prima facie like something one could agree to fairly readily.  But there's a problem.  One would probably be fairly safe in saying that "feelings" - or let's say particular states of consciousness - are accompanied by particular states of the brain.  But to say they are "of" the brain makes implicit assumptions about the possibility of a (neuro)scientific explanation of consciousness - an assumption that your next sentence, Arnold, makes more explicitly (with 'constitute').

I don't rule out such explanations (though I am extremely sceptical that they will ever emerge) but the first step in any discussion of consciousness, to my mind, is to try to explain/describe what we mean by the term. (After all, how could we ever explain X if we don't even know what we mean by X?)  Just equating it with 'feeling' is, to my mind, quite inadequate. 'Feeling' is simply a rough synonym.  And Nagel's attempt is no less inadequate. I freely admit that I have no good description; but that, I think, is not a bad starting point. It's often salutary in philosophy to begin by acknowledging what one doesn't know. One of the things that bothers me about so much of what I read in contemporary discussions of consciousness is an unwillingness to do this. So there is often a kind of philosophical hubris about what is surely one of the most difficult and mysterious questions in all human thought.

DA




2009-05-16
The 'Explanatory Gap'
Reply to Arnold Trehub

OF COURSE THE BRAIN'S THE CULPRIT: BUT HOW, AND WHY?


AT: "If we can agree... that feeling is a particular state of the brain... then... we can discuss what... state of the brain might constitute feeling [and] make progress on the how and why of feeling..."
"Constitutes" is a bit of a weasel word. Is feeling a cause of, an effect of, or the same thing as a brain state or property? Those are all the questions around which the feeling/function problem has always revolved: "constitutes" simply conflates these questions without answering them. (John Searle used to try the same trick by saying "caused-by-and-realized-in," really fast. It doesn't help. The questions are still begged.)

But I have no problem at all with agreeing that brain states somehow "constitute" feeling. Of course they do! I am not a spiritualist. The "hard" problem, alas, is explaining how and why they do. 

Bland (and blind) agreement on the fact that the brain must be the culprit does not give us a clue of a clue as to how and why it committed the crime!


-- SH

2009-05-16
The 'Explanatory Gap'
Reply to Stevan Harnad

SH: But I have no problem at all with agreeing that brain states somehow "constitute" feeling. Of course they do! I am not a spiritualist.

But one does not have to be a "spiritualist" (whatever that is exactly) to have difficulty with that formulation.  If one thinks that the brain "constitutes" consciousness, one seems committed to the view that consciousness is, by its nature, a physical  process. It may well be accompanied by, and enabled by, a physical process (indeed, presumably it must be) but beyond that we must surely admit we know absolutely nothing about its nature.

Once again, a description of what we mean by consciousness would be a good starting point...

DA

PS  Is that a common analytic viewpoint - that if one doesn't think that the brain "constitutes" consciousness, one is a "spiritualist"?





2009-05-16
The 'Explanatory Gap'
Reply to Derek Allan
DA: "Is that a common analytic viewpoint...?"
Derek, I regret to have to say that until and unless you can stop shadow-boxing with this "analytic" bugaboo of your own invention and instead say something of substance about something, there is simply nothing more that anyone can either say about or reply to your postings (at least nothing more that this non-analytic, non-philosopher can say).

-- SH

2009-05-17
The 'Explanatory Gap'
Reply to Jamie Wallace
Stevan,

I think you are wrongly assuming that the "problem" generated by uncomplemented categories indicates a problem which exists outside of the grammar in which those categories are defined.  In other words, the only problem here is the desire to take the notion of uncomplemented categories seriously.

You write:

"But the category "feeling" is one of a family of special cases (each of them causing conceptual and philosophical problems) because they are "uncomplemented categories" -- a kind of "poverty of the stimulus" problem arising from the fact that they are based (and can only be based) exclusively on positive instances: In contrast, the category "redness" is perfectly well-complemented: I can sample what it feels like to see red things and non-red things, no problem. But not so with the category "feeling": I can sample what it feels like to feel: I do that every time I feel anything."

To feel is to feel some X, so that any knowledge of feeling is knowledge of feeling some X.  Knowledge of feeling cannot be separated from knowledge of X.  There is thus no uncomplemented (and no "Cartesian") knowledge of feeling, just as there is no uncomplemented (and no Cartesian) knowledge of thinking.

Feeling is not an object of knowledge, but rather a way of knowing.  Thus there is no uncomplemented category to worry about here.  The problem you have been discussing is not a "hard problem" with which philosophy or science must reckon, but a simple problem with your grammar, with your categorizing "feelings" as objects of knowledge, and not ways of knowing. 

This error underlies your entire discussion, explaining your incoherent distinctions between Cartesian and non-Cartesian knowing and between functing and feeling.  It also explains the contradiction between your allegiance to physicalism and your insistence that feelings are somehow non-causal.  (Consider, if this contradiction is not already clear, that the term "physical" implies functional/causal congruity with respect to predictive models, and that this is a property which you deny feelings.  Consider also the contradiction implied by the fact that your argument here is motivated by the existence of feelings; for if feelings cannot causally influence behavior, how could they motivate it?)  Once the original category error is corrected, all of these problems disappear. 

Of course, you could try to argue that feelings really are uncomplemented categories.  Perhaps you wish to claim that one can feel without feeling some X, or that one could know that one was feeling without knowing that one was feeling some X.  But I don't see how you could support such a position.  So far, the only support you have provided is an appeal to common knowledge (as though it was just obvious that feeling could be separated from feeling some X) and your claim that anybody who denies this fact is disingenuously begging the question.  These tactics are no more persuasive than the theistic arguments they resemble.  The bottom line is, your claim invites contradiction without compensation, and I think the error is easy enough to correct.

2009-05-17
The 'Explanatory Gap'
Reply to Stevan Harnad
AT: "If we can agree... that feeling is a particular state of the brain... then... we can discuss what... state of the brain might constitute feeling [and] make progress on the how and why of feeling..."
SH: "But I have no problem at all with agreeing that brain states somehow "constitute" feeling. Of course they do! I am not a spiritualist. The "hard" problem, alas, is explaining how and why they do."
 
But, Stevan, you have claimed that explaining how and why is not merely hard, but impossible, because feelings have no causal consequences. It seems to me that you contradict your own argument when you acknowledge that feelings are states of the brain, because states of the brain are organized biophysical mechanisms with structural and causal dynamic properties. If one grants that feelings are constituted by particular brain states, one is not justified in claiming that feelings cannot have causal consequences.


.. AT  













2009-05-17
The 'Explanatory Gap'
Reply to Arnold Trehub
AT: "...you have claimed that explaining how and why is not merely hard, but impossible because feelings have no causal consequences..."
I have. And I've given my reasons for concluding that (incommensurability, the exhaustive quota of fundamental forces, the falsity of telekinetic dualism, and the sufficiency of functing for causally explaining all functing, hence the superfluousness and inexplicability of feeling).

But if you find my conclusion wrong, I'd be happy to hear how and why. 
AT: "It seems to me that you contradict your own argument when you acknowledge that feelings are states of the brain..."
There's no contradiction whatsoever. My argument is epistemic rather than ontic (except for the innocuous bit about the exhaustiveness of the four known forces). I am not saying that feelings are and are not caused by the brain. I am saying we cannot explain how or why. The explanatory gap is an epistemic gap, not an ontic gap. It's a shortfall in causal explanation, which seems to work successfully for everything else except feeling.

And please distinguish (1) the problem of explaining how brain function causes feeling (the "how" in the how/why) from (2) the even bigger problem that feelings cannot themselves be causes (the "why" in the how/why). 

In the first case there is (almost certainly) causation (but no causal explanation). In the second case there is not even causation.
AT:  "...if one grants that feelings are constituted by particular brain states one is not justified in claiming that feelings cannot have causal consequences."
It makes little difference what I "grant" about how the brain causes feelings, if neither I nor anyone else can explain how or why. But the question of the causal consequences of feelings (as opposed to the causal consequences of the functing that causes the feelings) is, in my view, the more perplexing side of the feeling/function problem.


-- SH



2009-05-17
The 'Explanatory Gap'

BELIEVING IS FEELING: CORRELATION, CAUSATION AND INFORMATION


JS: "you are wrongly assuming that the "problem" generated by uncomplemented categories... exists outside of the grammar in which those categories are defined..."

I do not see that anything I have said has anything to do with grammar! I am not speaking of grammatical categories but sensorimotor and verbal categories: kinds of things (objects, events, actions, states, properties) that we are able to recognize, call by their names, and to an extent describe. Many of these categories -- especially the first ones we acquire -- are not derived from definitions or descriptions, but grounded in sensorimotor experience (which also happens to be felt). (And those categories that we do acquire via definition are recombinations of categories we have acquired through sensorimotor experience, likewise felt. It also feels like something to understand what a word means.)

JS: "To feel is to feel some X, so that any knowledge of feeling is knowledge of feeling some X.  Knowledge of feeling cannot be separated from knowledge of X."

To feel something is to feel something. We all know that. The way we know is by feeling this (e.g., a headache) and by feeling that (e.g., a toothache), and noticing that they feel different, but that they both feel like something. We all know that too. There is no point mystifying it. (And "something" is a perfectly serviceable -- if rather abstract -- generic category too, though it too might have some complementation problems of its own!)

Feeling a headache is something we can recognize and call by its name. So is feeling a toothache. And so is generic feeling; that means feeling something; and feeling something is something that all feelings of X or Y or Z have in common.

JS: "There is thus no uncomplemented (and no "Cartesian") knowledge of feeling, just as [there] is no uncomplemented (and no Cartesian) knowledge of thinking..."

One thing at a time. Feeling this (e.g., a headache) is a complemented category. We can all recognize and call it by its name. Feeling that, a toothache (part of the complement of feeling a headache), is not feeling a headache. Hence the category "what it feels like to feel a headache" (aka "what a headache feels like") is a perfectly well-complemented category.

In contrast, the category "feeling something" (where "something" can be anything at all) is likewise a category ("what it feels like to feel anything at all, be it headache or toothache") -- a category that we can all recognize and call by its name. 

But "feeling something" is not a complemented category, because we do not and cannot know what it feels like to feel nothing at all. (We can know what it feels like to feel this and not-that, but that's not the complement of feeling itself, but only the complement of feeling this, or that.)

So neither the recognizability and identifiability of the category "feeling (something)" nor its uncomplementedness is in doubt. We do have the category even though we can only sample positive instances of it. 

We have other categories based on positive instances alone -- for example, what it feels like to be a bachelor, if one is and always has been a bachelor. There we flesh out the complement, and the invariant features of what it feels like to be a bachelor, from guessing what it would feel like to be married. Of course, once one gets married, one may discover that being married does not feel like what one had expected at all -- in which case one did not fully know what it feels like to be a bachelor either, having only experienced positive instances of it. 

The difference in the case of the category "feeling" itself is that its complement cannot be filled in by proxy hypothesis or analogy, as in the case of imagining what it would feel like to be married, because in the case of feeling, the category "what it would feel like not to feel" is both empty and self-contradictory. So we may be off (somewhat) about what, exactly, it feels like to feel, in the way we could be off about what it feels like to be a bachelor; and that may (and indeed does) create conceptual problems. But it does not mean the category "what it feels like to feel (something)" is either empty or incoherent; just a bit pathological, cognitively.

You also seem to be denying that I can have cartesian certainty that I am feeling ("[t]here is no... "Cartesian"... knowledge of feeling") when I'm feeling (sentio ergo sentitur) -- and that's a rather bold denial. I wonder if you have an argument to support it? And unless I'm misunderstanding, you even seem to be tilting against the cogito itself, in its original formulation by Descartes, in claiming that "[there] is no... Cartesian... knowledge of thinking."

I'd say your chances are better if you just attack my notion of uncomplemented categories, rather than trying to take on Descartes too!

JS: "Feeling is not an object of knowledge, but rather a way of knowing..."

I would say feeling's the only way of knowing, since unfelt "knowledge" (as in the case of an encyclopedia, computer, or one of today's robots) is no knowledge at all. And that includes things that Freud (no philosopher) lulled us into calling "unconscious knowledge": In a feeling creature like me, there's knowledge, namely, the things I know, and know that I know, and feel that I know, whilst I'm busy feeling that I know them. All the same things. These are not cartesian (certain) knowledge; they're just beliefs I have, some of which might even be true. But all the beliefs are felt (whilst they're being believed, which of course feels like something). 

(The same data, including verbal, propositional data, implemented inside a feelingless robot, would not be beliefs or knowledge, but merely data and states, along with the functional capacity that the data and states subserve; in other words, all just functing. Even in a feeling, hence true-believer/knower like me, those of my brain states that are not being felt are not beliefs but merely functional capacity plus the [mysterious] potential to be felt, hence to become beliefs while being felt.)

I also have know-how -- sensorimotor and even cognitive skills that I am able to perform without knowing how I manage to perform them. (Most of cognition and behavior is like that. You can do it, but you have no idea how: you're waiting for cognitive science to discover how you do it, and then tell you.) Some like to call that "unconscious" or "implicit knowledge," but I think it's more accurate to say that it's the functional basis of my know-how, of my performance capacity. (It's also the explanatory target of cognitive science in general, and the Turing Test in particular.)

Another way of thinking of the "explanatory gap" is to ask why feelings accompany any of this -- whether my explicit knowledge or the exercise of my implicit know-how: Why is it all not just functed? Until that question is answered, feeling cannot be said to be a "way of knowing," but merely a passive (and apparently superfluous) correlate of some forms of know-how. (Don't forget that, functionally speaking, explicit, declarative knowledge is just a form of know-how too -- let's call it "know-that" -- a form of know-how in which we happen to be able to verbalize and describe some of the underlying functional algorithms or dynamics.)

Harnad, S. (2007) From Knowing How To Knowing That: Acquiring Categories By Word of Mouth. Presented at Kaziemierz Naturalized Epistemology Workshop (KNEW), Kaziemierz, Poland, 2 September 2007. 

JS: "The problem you have been discussing is not a "hard problem"... but a simple problem... with your categorizing "feelings" as objects of knowledge, and not ways of knowing.

I'll settle for your solution to the simple problem of how and why feeling (rather than just functing) is a way of knowing -- as soon as you explain it...

JS: "This error underlies your... incoherent distinctions between Cartesian and non-Cartesian knowing and between functing and feeling."

You've remembered to call them incoherent but you've forgotten to explain how and why... 

JS: "It also explains the contradiction between your allegiance to physicalism and your insistance that feelings are somehow non-causal."

No contradiction at all (as I've just got done explaining to Arnold Trehub). I have not said feelings both are and are-not causal. I have said that we cannot explain how or why. That's called the explanatory gap.  

JS: "the term "physical" implies functional/causal congruity with respect to predictive models, and... this is a property which you deny feelings..."

I am denying nothing except what one can only affirm if one can explain how and why (and one hasn't).  

JS: "...your argument... is motivated by the existence of feelings [but] if feelings cannot causally influence behavior, how could they motivate it?

Did I say anything about motivation? (What is motivation, anyway, apart from yet another set of feelings correlated with yet another set of functions?)

But, to answer your question: feelings can correlate with behavior if the feelings and behavior are caused by the same functing. The trouble is, we don't know how or why the brain would bother to funct feelings as well as behavior, rather than just go ahead and funct the behavior, without any sentimentality...

JS: "Perhaps you wish to claim that one can feel without feeling some X, or that one could know that one was feeling without knowing that one was feeling some X..."

No I don't wish to claim that, since it's not true. And why would I wish or need it to be true? (Please, before you pounce on "wish" or "need" as self-contradicting, read again what I said above about correlates and common causes.)

JS: "...the only support you have provided is... that feeling could be separated from feeling some X and... that [to] den[y] this... is... [to] beg... the question.  These tactics are no more persuasive than the theistic arguments they resemble..."

I think you have not understood the argument. I said that from feeling A, feeling B and feeling Z, we could abstract the invariant feeling X (where X is something, anything). And that was perfectly ordinary categorization (except that "feeling" is uncomplemented.)

And what I said was question-begging was assigning a causal role to feeling without explaining how and why.

(Theistic??? I have inferred (by abstracting the common invariant across many postings) that NA has some sort of thing about "analytic philosophers." Do you perhaps have some sort of bugaboo too -- with "theists"?)

-- SH



2009-05-18
The 'Explanatory Gap'
Reply to Stevan Harnad

Analytic or not - and the view you mention certainly seems reminiscent of the scientistic approach one so often finds in analytic approaches to consciousness - my question remains:

Is that a common viewpoint - that if one doesn't think that the brain "constitutes" consciousness, one is a "spiritualist"?  (I thought spiritualists were people who held seances etc).

DA

2009-05-18
The 'Explanatory Gap'
Reply to Derek Allan

POLTERGEIST


DA: "Is that a common viewpoint - that if one doesn't think that the brain "constitutes" consciousness, one is a "spiritualist'?  (I thought spiritualists were people who held seances etc)."
(1) I'm afraid I have no idea how common the viewpoint is. What I take to be important in trying to reach a valid conclusion is the evidence and the reasoning rather than the vote-count.

(2) The common term for those who don't think the brain "constitutes" consciousness is "dualist." But I don't think "dualist" is self-explanatory. I have also referred to the position as "telekinetic dualism." And of course telekinesis, clairvoyance, teleportation and telepathy are what spiritualists believe in, and what they try to do in their seances.

(3) The link is causality: If I am ready to believe that I am using a mental force to move my arm when I feel like it, then I have much the same belief as those who believe in action-at-a-distance in space and time through "mind-over-matter." 
[As the quip goes: "Madame, we have established your profession; we are merely haggling over the price" -- or, in this case, the distance, in time and space. (This quip is sometimes attributed to Churchill, but who knows? Unspeakable quantities of hokum -- and often spiritualist hokum -- have been attributed to poor Einstein, no longer here to defend himself from his putative "sayings.")]

(4) Note that telekinetic dualism (though not under that name) is the default belief of most people, that it is a perfectly natural belief, congruent with all of our experiences and intuitions; and it is of course at the root of our belief in an immaterial, immortal soul, and thence all the rest of the supernatural, including the afterlife, the demiurges, and the omnipotent deities. (It just happens to be untrue, although, again, no one can explain how or why, other than to point out, quite sensibly, that the brain is the only credible culprit, which it surely is.)


-- SH







2009-05-18
The 'Explanatory Gap'
Reply to Stevan Harnad


SH: "(4) Note that telekinetic dualism (though not under that name) is the default belief of most people, that it is a perfectly natural belief, congruent with all of our experiences and intuitions; and it is of course at the root of our belief in an immaterial, immortal soul, and thence all the rest of the supernatural, including the afterlife, the demiurges, and the omnipotent deities. (It just happens to be untrue, although, again, no one can explain how or why, other than to point out, quite sensibly, that the brain is the only credible culprit, which it surely is.)"

I would have thought the default position for many people is a modest agnosticism. It is certainly mine.  The claim that 'it all depends on the brain' etc strikes me as a kind of scientistic dogmatism - certainly until someone can demonstrate clearly that consciousness can be explained in purely neuroscientific terms - which no one seems to have come within a country mile of doing yet. (Step One, as I have suggested, would be to give a satisfactory description of what it is one is trying to explain and even that, it appears, has not yet been done. Not a promising start...)

(As for the afterlife, omnipotent deities etc, I prefer to remain agnostic on those matters too. I have yet to find any irrefutable proof that they do not exist.)

DA



2009-05-18
The 'Explanatory Gap'
Reply to Derek Allan

PASCAL'S WAGER, OR "WHY I AM NOT AN AGNOSTIC"


DA: "...for the afterlife, omnipotent deities etc... I would have thought the default position for many people is a modest agnosticism."
Although this is getting distinctly silly (and drifting ever further from the "explanatory gap"), I cannot resist replying (because the connection is not altogether zero) that default agnosticism suffers from the same rational (and practical) defect as Pascal's Wager:

Pascal thought that -- given the trade-off between the grave risk of eternal damnation if Received Writ is all true and one fails to obey, and the mild risk of a somewhat more constrained finite lifetime if it's false yet one obeys anyway -- the lesser risk should be the default option.
This founders on the fact that there are competing claims on our obedience, from the Mosaic edicts to the Mohammedan injunctions to voodoo to the dictates of the Great Pumpkin. Is one to hew then, as in Selfridge's Pandemonium model, to whichever demon raises the ante the highest? (If so, I'll meet you and double the eternities of agony you will suffer if you don't send my temple a $1M pledge and make and send 100 copies of this letter to 100 other infidels.)
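(For concreteness, here is a minimal sketch of the wager in expected-utility terms; the notation is mine, not Pascal's. With subjective probability p > 0 that the Writ is true, and c the finite cost of a more constrained life:

  EU(obey)    = p \cdot (+\infty) + (1 - p) \cdot (-c)
  EU(disobey) = p \cdot (-\infty) + (1 - p) \cdot 0

For any nonzero p, obedience "dominates"; but once several mutually incompatible creeds each threaten infinite loss, the same arithmetic "recommends" every one of them at once, and the wager decides nothing.)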

There are also links here with "flat priors" in Bayesian Inference, with the Cauchy Distribution, with Zeno's Paradox (especially Lewis Carroll's version of it), and with Dawkins's "Green-Eyed Monster," but I alas haven't the time to explain them all.
DA: "The claim that 'it all depends on the brain' etc strikes me as a kind of scientistic dogmatism... until someone can demonstrate clearly that consciousness can be explained in purely neuroscientific terms..."
Just a clarification, that the predicate "all depends on the brain" referred, yet again, only to the explanatory gap: how/why the brain causes feelings. (The eschatology was just a bonus -- though of course the brain, indeed multiple brains, are behind that too, if rather more circuitously!) 

Derek seems to think that the explanatory gap -- an epistemic gap -- somehow sanctions agnosticism about the brain; I think it just sanctions scepticism about the power of causal explanation to explain the fact of feeling. It raises no doubts whatsoever, in my mind, about the fact that feelings are caused (somehow) by the brain.



-- SH





2009-05-18
The 'Explanatory Gap'
Reply to Stevan Harnad
AT: "But, Stevan, you have claimed that explaining how and why is not merely hard, but impossible because feelings have no causal consequences. It seems to me that you contradict your own argument when you acknowledge that feelings are states of the brain, because states of the brain are organized biophysical mechanisms with structural and causal dynamic properties. if one grants that feelings are constituted by particular brain states one is not justified in claiming that feelings cannot have causal consequences."

SH: "But if you find my conclusion wrong, I'd be happy to hear how and why."

Your conclusion is wrong because you appear to be endorsing each of the following propositions:

(a) All brain states have causal consequences.
(b) Feelings are brain states.
(c) Feelings have no causal consequences.

Given (b), proposition (c) is contradicted by proposition (a).

.. AT





2009-05-18
The 'Explanatory Gap'
Reply to Arnold Trehub

MAKING COMMON CAUSE


AT: "Your conclusion is wrong because you appear to be endorsing each of the following propositions:
-- (a) All brain states have causal consequences.
-- (b) Feelings are brain states.
-- (c) Feelings have no causal consequences.
"Given (b), proposition (c) is contradicted by proposition (a)."
Here is a sure way to know that one has either cheated, trivialized, or otherwise begged the question in the way one has formulated the problem: if one's formulation would apply unproblematically and indifferently to any old brain property at all -- "All brain states have causal consequences - X is a brain state - So X has causal consequences - No problem" -- then there is a problem with one's formulation of the problem.

The problem is that when "X" happens to be feeling, it is not at all evident what we are saying when we say "feeling is a brain state." Behavior, for example, is not a brain state, though it is caused by brain states. ("State" is a weasel-word, covertly doing double-duty here.)

So let us assume (since it is surely true) that brain states cause feelings, just as they cause behavior (we can explain how and why brain states cause behavior, but we cannot explain how and why they cause feelings).

Now with behavior -- which, to repeat, is not a brain state, but is caused by brain states, with no problem at all about explaining why and how it is caused -- there is also no problem with the consequences of what the brain state causes, in causing behavior. Behavior itself has its own consequences: My brain, with the help of a slippery pavement, causes me to stumble; I fall on your cake; the cake is squashed; you send me the bill.

But with feeling -- which, to repeat, is not a brain state, but is caused by brain states, inexplicably [that's the first part of the problem, and hence of the explanatory gap] -- there is indeed a problem, an even greater problem, with the consequences of what the brain causes, in causing feeling. For feeling does not have (and cannot have -- on pain of telekinetic dualism) any independent causal consequences of its own: My brain, with the help of a slippery pavement, causes me to stumble (though I feel I tried everything I could to keep my balance); I fall on your cake (I feel clumsy); the cake is squashed (I feel embarrassed; you feel angry); you send me the bill (I pay it, because I feel I should); etc.

So, to reformulate your scenario without begging the question:

-- (a) All brain states have causal consequences.
-- (b) Feelings are (unexplained) causal consequences of brain states.
-- (c) Feelings have no causal consequences: 
-- (d) What we feel to be causal consequences of feelings are really the causal consequences of the brain states that (also, inexplicably) cause the feelings.

Given (d), proposition (c) is perfectly consistent with propositions (a) and (b).

Common causes (functing) can have multiple correlated effects, and in the case of behavior (functing) and feeling, the feeling has no independent (i.e., non-telekinetic) effect, it just dangles, inexplicably.

That is the explanatory gap -- and it cannot be closed by a series of non-explanatory propositions that presuppose the solution, or the non-existence, of the "hard" problem.

-- SH








2009-05-19
The 'Explanatory Gap'
Reply to Stevan Harnad
SH: "Derek seems to think that the explanatory gap -- an epistemic gap -- somehow sanctions agnosticism about the brain"

I'm not sure how you got this from what I have written, Stevan.  I've made two separate points in this connection:

(1) I think that the term explanatory 'gap' is an exercise in subtle self-flattery - since it may in fact be an explanatory abyss - or dead-end. 'Gap', as I've said several times, suggests one is on the right track but has not quite got there. I think the 'scientific' account of consciousness may well be a road to nowhere. In any case, that possibility has at least to be acknowledged. And all the talk about a 'gap' tends to obscure it.

(2)  I am agnostic about explanations of consciousness, just as I am about god/s etc, not because of any complicated 'wager' issues (and still less because of the so-called 'gap') but simply because I confess I do not know. Frankly, I think this is the only intellectually honest position one can adopt - unless one is sure one does know and can say why.

DA

2009-05-19
The 'Explanatory Gap'
Reply to Derek Allan

'NESCIO' IS NOT A SUBSTANTIVE OPTION 

The reply to Derek is exactly the same as the reply to Arnold, but for the opposite reason:

First the reply, again: "Here is a sure way to know that one has either cheated, trivialized, or otherwise begged the question in the way one has formulated the ["hard"] problem: if one's formulation would apply unproblematically and indifferently to any old brain property at all."

Now Derek's contribution to the discussion of the problem:

DA: "I am agnostic about explanations of consciousness... [not] because of the so-called 'gap'... but simply because I confess I do not know."

This lumps neuroscience's failure to explain how and why we feel together with its failure to explain schizophrenia, two (unsolved) problems of an entirely different order (one "easy," the other "hard," for a number of reasons that have been repeatedly made explicit in this discussion, and that constitute the "explanatory gap").

The trouble, again, with what Derek seems to be saying, is that it simply has no substance, one way or the other. Apart from inveighing repeatedly against the straw man of "analytic philosophy," nothing whatsoever is being said other than that consciousness has not yet been explained (and that "we need to 'define' it").

Schizophrenia will be "defined" when we know how and why the brain generates it; till then, it's enough to point to it. Ditto for consciousness (feeling). But for the latter (and not the former), principled problems of explanation have been repeatedly pointed out, very explicitly. "I confess I do not know" does not even begin to engage the question.

The following says even less:

DA: "...the... explanatory 'gap' ... may in fact be an explanatory abyss - or dead-end... [T]hat possibility has at least to be acknowledged..."

It has been acknowledged, repeatedly, with substantive reasons. Now it's your turn to say something of substance, rather than just repeating that we need to "define" consciousness, because maybe that will make the problem of explaining it go away.

-- SH


2009-05-19
The 'Explanatory Gap'
Reply to Stevan Harnad
SH: "This casts neuroscience's failure to explain how and why we feel with its failure to explain schizophrenia, two (unsolved) problems of an entirely different order".

Well, I don't know if they are 'of a different order' or not. But there are umpteen things in the universe that are not yet explained and I don't know why one would single out schizophrenia. 

My point is simply this:  If consciousness has not been explained (and despite the optimistic titles of some books I see around, I don't think we are within a bull's roar of explaining it) then why not honestly admit it?  All the talk of 'explanatory gaps' is, to my mind, just a way of pulling our own legs. It implies that we know we are on the right track etc. But we may be on the completely wrong track. We could, for all we know, be like the medieval alchemists who thought they were on the brink of making gold out of base metal but were in fact just groping around in the dark.

SH: "Now it's your turn to say something of substance, rather than just repeating that we need to "define" consciousness, because maybe that will make the problem of explaining it go away."

I have absolutely nothing 'of substance' to say if, by that, you mean explaining consciousness. I think the problem is gigantic, hugely baffling. But trying to define what we mean by it would be a useful start. Then at least we might have some vague idea of what it is we are trying to explain. The best I've seen so far is Nagel's thing about being 'like' something, which, for the reasons I've given, is of no use whatsoever.

DA


 

2009-05-20
The 'Explanatory Gap'
Reply to Stevan Harnad
SH: "But with [1] feeling -- which, to repeat, is not a brain state, but is caused by brain states, inexplicably [that's the first part of the problem, and hence of the explanatory gap] -- there is indeed a problem, an even greater problem, with the consequences of what the brain causes, in causing feeling. For [2] feeling does not have (and cannot have -- on pain of telekinetic dualism) any independent causal consequences of its own:"

[1] Your endorsement of the "explanatory gap" clearly depends on the key assumption that a feeling is not a brain state despite the fact that a feeling is caused by a brain state. In accordance with a non-dualistic view of the matter, you take feelings to be physical events. As physical events, feelings must exist somewhere in the physical universe. A legitimate question is this: If a feeling does not exist as a part of the brain of the individual having the feeling, where does it exist?

[2] If a feeling is a physical event (physical events can have causal consequences without telekinetic dualism), what is your principled explanation for the assumed inability of feelings to have causal consequences?

In your reformulation, you speak of feelings as being "unexplained" rather than "inexplicable". I have no problem with this change of stance.

.. AT 









2009-05-20
The 'Explanatory Gap'
Reply to Jamie Wallace
Does the explanatory gap require a functional explanation for how conscious experience arises from the material brain? Or, does the gap require an explanation of why we have conscious experience?

A functional explanation ignores the question of why we experience (and must simply presume that we do), in order to explain how brain function x causes phenomenal experience y. Also, a functional explanation is a form of communication, conveyed in language and/or measurements. "How does the brain cause my seeing red?" The functional explanation for this phenomenal experience will be given in terms of physical, causal processes.    

Given an explanation for how the brain's function causes phenomenal consciousness, this still leaves unexplained what the experience of phenomenal consciousness feels like. No amount of knowledge-that, or explanation, can stand in for the phenomenal sensations which those explanations are intended to explain -- as Jackson's 'Mary's Room' demonstrates. But we don't have to explain the sensations; conscious experience (including sensations) is the common ground of our communications and explanations.

Consciousness is a prerequisite for communication, including the communication of functional explanations, and it is also the ground of our judging whether those explanations are adequate to describe how this sensation is caused by the brain.

However, I don't wish to imply that the argument boils down to differences of opinion over what degree of explanation is sufficient to close the gap, since I suspect that proponents of the explanatory gap would remain unsatisfied by any degree of purely functional explanation. [I'm actually interested in learning what form of explanation would satisfy proponents].

If so, the explanation that would fill the explanatory gap appears to be a non-functional, non-causal explanation to the question of why phenomenal experience occurs. I don't see that the question of why feelings exist (or occur) is any different from the question of why anything at all exists. That certainly is a hard problem.        

2009-05-20
The 'Explanatory Gap'
Reply to Arnold Trehub

REARRANGING THE ONTIC FURNITURE WON'T FILL THE EPISTEMIC GAP


AT: "Your endorsement of the "explanatory gap" clearly depends on the key assumption that a feeling is not a brain state despite the fact that a feeling is caused by a brain state." 
It doesn't really depend on that at all:

If feeling were a "brain state" rather than an (unexplained and inexplicable) effect of a brain state, then instead of an effect of the brain state being a causal dangler, the brain state itself would be a causal dangler. Either way, we are just massaging terms, but not explaining how and why we feel. That's why all of this explanatorily-empty ontological house-keeping does no good. It's a substantive explanation we want (despite the obstacles that have been itemized), not metaphysical comfort-calls without explanation.

Besides, I suggested that feeling was no more a brain state (as opposed to the effect of a brain state) than behavior is: Both feeling and acting are things our brain does rather than things our brain "is."
AT: "In accordance with a non-dualistic view of the matter, you take feelings to be physical events."
It no more helps to call feelings "physical" (or "nonphysical") than it does to call them "brain states." What we want to know is how and why we (or our brains -- makes no difference) feel, rather than just "funct". Solemnly pledging ontic allegiance to "monism" or "dualism" does not advance us by one epistemic epsilon....
AT: "As physical events, feelings must exist somewhere in the physical universe. A legitimate question is this: If a feeling does not exist as a part of the brain of the individual having the feeling, where does it exist?"
...nor does pinpointing where (or when -- or even what) we feel help to close the how/why gap one iota...
AT: "If a feeling is a physical event (physical events can have causal consequences without telekinetic dualism), what is your principled explanation for the assumed inability of feelings to have causal consequences?"
I'm not the one giving the explanations, I'm the one asking for them! And dubbing feeling "a physical event" does not answer the how/why question either.

Here is another way to put the entire feeling/function problem in such a way as to bring the problem of causality out into the open:

When I lift my finger, it feels as if I did it because I felt like it. In reality, my brain did two things: (1) it caused me to feel like lifting my finger and (2) it caused me to lift my finger. The "hard" question about causality, the one that creates the explanatory gap, is: how, and especially why, did my brain bother with (1) at all, since it is obviously causally superfluous for (2), an effectless (ineffectual) correlate (except if telekinetic dualism is true, which it's not).

In other words, if telekinetic dualism (i.e., the 5th-force causal power of feelings) is false, then the burden for "principled explanation" is on those who wish to claim that feelings do have causal consequences: how? why?
AT: "In your reformulation, you speak of feelings as being 'unexplained' rather than 'inexplicable'. I have no problem with this change of stance."  
Again, if we agree that there is no explanation so far of how and why we feel rather than just funct, the burden is on those who think that there ever can be an explanation, in light of the causal obstacles (unlike anything else under the sun) that any explanation would have to surmount. Preferring "unexplained" to "inexplicable" does not help; it just gives the soothing feeling (without justification) that the mind/body (feeling/function) problem is just another problem science has not yet solved; no reason to expect it won't get round to it eventually...


-- SH




2009-05-20
The 'Explanatory Gap'
Reply to Luke Culpitt
LC: "...the explanation that would fill the explanatory gap appears to be a non-functional, non-causal explanation to the question of why [feeling] occurs..."
That would be a terrific way to keep begging the question indefinitely, since the question of why we (or our brains) feel is a functional, causal question, just as the question of why we (or our brains) act is. 

-- SH


2009-05-21
The 'Explanatory Gap'
Reply to Stevan Harnad
What is slightly disturbing in this exchange is the implicit assumption that the "physical" is straightforward and explicable whilst the mental is difficult to define and currently inexplicable.  For instance,  the contributors to this post and to this thread in general all seem to agree that a succession of brain states is something that could be easily understood, being physical, although they disagree about how far such a succession of states might explain experience.

This characterisation of the problem seems at best an oversimplification.  Suppose we could explain all experience in terms of some kind of functionalism; we would then need to understand the nature of a "function".  At first inspection this seems obvious - a function is an operation on a state to produce another state - so functions cause changes. Now we get into trouble. One of the most difficult problems in the philosophy of physics is the notion of "change".  No-one understands how one physical state gives rise to another.  This is reviewed in the Stanford Encyclopedia of Philosophy article on "Change" and there can be little doubt that the current state of knowledge about physical time and change is limited at best. According to classical physics (i.e., relativistic physics) the universe is construed as a "block universe" in which, to quote Hermann Weyl (1920):

"Only the consciousness that passes on in one portion of this world
experiences the detached piece which comes to meet it and passes
behind it, as history, that is, as a process that is going forward
in time and takes place in space." Hermann Weyl (1920) .

(See also Petkov, V. (2002). Montreal Inter-University Seminar on the History and Philosophy of Science.
http://alcor.concordia.ca/~vpetkov/absolute.html )

So, if it is conceded that conscious experience is purely functional then classical physicalism needs a conscious observer outside of this purely functional world to observe the functional observer.

OK, classical physicalism does not give us the simple answers we wanted, so let's get quantum mechanical. Does quantum physics give us simple functions that are independent of any conscious observers? Quantum mechanics does not bring us any nearer to explaining change and also introduces other observer-related problems. A major problem in quantum mechanics is the "preferred basis problem" and this has caused considerable debate in the past decade. Technically this is stated as:

"For a given quantum state, what determines the orthogonal set of projectors to which the Born rule assigns the probabilities?" (Janssen 2008)

Sadly the probabilities become horribly mixed up with the state of the observing system and to avoid this problem physicists such as Zurek (2003) usually tackle the problem by simply assuming that there is a "preferred basis" that appears miraculously as a "given".
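(For readers unfamiliar with the formalism, the Born rule in its standard textbook form -- this is not a quotation from Janssen -- can be written:

  \Pr(i) = \langle \psi | P_i | \psi \rangle,   with   \sum_i P_i = I   and   P_i P_j = \delta_{ij} P_i,

that is, for a normalised state |\psi\rangle and a set of mutually orthogonal projectors P_i summing to the identity, each projector is assigned a probability. The "preferred basis problem" is the question of what, physically, singles out one such set of projectors rather than another.)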

Given the current state of our physical knowledge the invocation of "physicalism" as an "easy" explanation of conscious experience is unwarranted. Instead we might guess that the problem of conscious experience is somehow linked to the problem of time and change in physics.


Further discussion

See Materialists should read this first


References

Janssen, Hanneke (2008) Reconstructing Reality: Environment-Induced Decoherence, the Measurement Problem, and the Emergence of Definiteness in Quantum Mechanics.
http://philsci-archive.pitt.edu/archive/00004224/

Weyl, H (1920) "Space, Time, Matter". Dover Edition

Zurek, W.H. (2003). Decoherence, einselection and the quantum origins of the classical. Rev. Mod. Phys. 75, 715 (2003)
http://arxiv.org/abs/quant-ph/0105127






2009-05-21
The 'Explanatory Gap'

IMPORTED QUANTUM PUZZLES DON'T HELP, THEY JUST DISTRACT


JWKMM: "[There] is the implicit assumption that the 'physical' is straightforward and explicable whilst the mental [feeling] is difficult to define and currently inexplicable."

No assumptions. The problem is explaining how and why we feel rather than just "funct." The problem is neither solved nor dissolved by pointing to putative problems in physical (i.e., functional) explanation.

JWKMM: "...the contributors.. all seem to agree that a succession of brain states is something that could be easily understood, being physical, although they disagree about how far such a succession of states might explain experience [feeling'."

The problem is explaining how and why people feel, not with explaining how and why apples fall.

JWKMM: "Suppose we could explain all experience [feeling[ in terms of some kind of functionalism, we would then need to understand the nature of a 'function'."  

We understand function well enough. And to suppose that feeling can be explained functionally is to suppose an answer to a question that some of us are arguing is unanswerable. That is begging the question.

JWKMM: "One of the most difficult problems in the philosophy of physics is the notion of 'change'.  No-one understands how one physical state gives rise to another."  

It's understood to a good enough approximation to make functional explanation unproblematic (everywhere except possibly in QM). But it does not even begin to explain how and why we feel.

JWKMM: "So, if it is conceded that conscious experience [feeling] is purely functional then classical physicalism needs a conscious [feeling] observer outside of this purely functional world to observe the functional observer."

A moment ago we were to "suppose" (against all reasons adduced) that feeling could be explained functionally. Now we are to "concede" it, and the result is supposed to be that we need a feeling observer of function. (This strikes me as QM-puzzle-motivated gobbledy-gook, I'm afraid.)

JWKMM: "Quantum mechanics does not bring us any nearer to explaining change." 

So let's stay far away from quantum mechanics and focus on the explanatory gap, which is about explaining how and why we feel rather than just funct, like everything else (including QM).

JWKMM: "...the problem of conscious experience [feeling] is somehow linked to the problem of time and change in physics."

No, the problem is that feeling is correlated with time and change in biological systems but no one can explain how or why.

-- SH





2009-05-21
The 'Explanatory Gap'
Reply to Stevan Harnad
JWKMM: "...the problem of conscious experience [feeling] is somehow linked to the problem of time and change in physics."

SH: "No, the problem is that feeling is correlated with time and change in biological systems but no one can explain how or why."

The very nature of (human) consciousness may be linked to - or rather deeply imbued with - a sense of time and change (full stop).  Which is one of the reasons why, I often think, examples like "seeing red" give a very impoverished characterization of (the operations of) consciousness.

But this gets back to the issue which I have suggested is fundamental - to first try to say what we mean by consciousness - i.e. to stop speaking as if that is self-evident. That task alone should keep us busy for about 100 years...

DA


2009-05-21
The 'Explanatory Gap'
Reply to Stevan Harnad
SH: "We understand function well enough"

I strongly disagree. I would maintain that we do not understand function well enough, and it is this lack of understanding that makes it appear as if there is an explanatory gap between physical theory and mental observation. Your comment is a simple dismissal of my point without any detailed rebuttal; perhaps I did not make my argument sufficiently clear.  The crucial point in my argument is that current physical theory describes the universe as a four dimensional block structure with all events already fixed (Petkov 2006). In a block universe there is no change, no experience or becoming.  The block universe has "functions" in the sense that different times have different states, but it has no way of getting from one state to another; it has no way for the observer to move through time to actually enact functions.  In our current understanding of the physical world functions are static and do not evolve, so an "explanatory gap" between a functional description and our experience is inevitable.

Philosophers used to be able to brush aside the block universe description as somehow self-evidently absurd, but in the past three years the existence of time has been confirmed experimentally by a rash of experiments that show electrons can interfere with their own historical selves. The experiments use attosecond laser pulses "..for following the dynamics of time-dependent superposition of states" (Remetteri et al 2006), effectively creating a double-slit in the time domain (Lindner et al 2005, Ishikawa 2006).  These double slit experiments in the time domain are demonstrating the existence of time in the same way as the classical, spatial, double slit experiment provides a quantum mechanical demonstration of the existence of space.  They confirm the prediction of an existent time dimension that is implicit in special relativity theory (Petkov 2006).

If the block universe exists then, as Weyl and recently Petkov (2002) have pointed out, physicalism currently has no explanation for experience, becoming and change.  In a block universe change appears to be an unexplained property of the observer (Weyl 1920, Petkov 2002).  This absence of a widely accepted reason for change in physical theory does not mean that physicalism is false.  It simply means that physical theory is incomplete because it is unable to explain even how a traffic light actually changes state.  Physical theory needs to be developed so that change can be accommodated as a separate phenomenon from dimensional time.

As a possible direction for the future amendment of physical theory, Petkov (2002) suggests that "it is natural to expect that the consciousness 'operates' at a sub-micro level where the frozenness of our macro reality does not hold any more".  As an alternative, I would suggest, on purely empirical grounds, that there are more ways of ordering events than are permitted by the four dimensions of space-time (see Time and conscious experience: http://newempiricism.blogspot.com/2009/02/time-and-conscious-experience.html).

SH: "No, the problem is that feeling is correlated with time and change in biological systems but no one can explain how or why."

Stating that "feeling" is correlated with physical change is stating that feeling is correlated with an unknown physical phenomenon and hence, as yet, feeling is inexplicable by physical theory. So I agree that there is a problem explaining "feeling" but would point out that this problem arises because physical theory is incomplete, not because there is any fundamental separation between the physical and the mental.

References

Ishikawa, K.L. (2006) Temporal Young's interference experiment by attosecond double and triple soft-x-ray pulses. Physical Review A 74, 023806 (2006).

Lindner, F., Schaetzel, F.G., Walther, H., Baltuska, A., Goulielmakis, E., Krausz, F., Milosevic, D.B., Bauer, D., Becker, W., and Paulus, G.G. (2005) Attosecond double-slit experiment. Phys. Rev. Lett. 95, 040401 (2005).

Petkov, V. (2002). Montreal Inter-University Seminar on the History and Philosophy of Science. http://alcor.concordia.ca/~vpetkov/absolute.html

Petkov, V. (2006). Is There an Alternative to the Block Universe View?, in: D. Dieks and M. Redei (eds.), The Ontology of Spacetime. Series on the Philosophy and Foundations of Physics (Elsevier, Amsterdam). http://philsci-archive.pitt.edu/archive/00002408/

Remetteri, T., Johnsson, P., Mauritsson, J., Varjui, K., Lepine, F., Gustafsson, E., Kling, M., Khan, J., Lopez-Martens, R., Schafer, K.J., Vrakking, M.J.J. and A. L'Huillier. (2006) Attosecond electron wave packet interferometry. Nature Physics 2, 323-326 (2006).


2009-05-21
The 'Explanatory Gap'
Reply to Stevan Harnad
SH: "...the question of why we (or our brains) feel is a functional, causal question..."

If that is true, then presumably "the question of why we...feel" could only be answered by a functional, causal explanation. But functional, causal explanations are also given to answer the question of how the brain causes feeling. So, is there any distinction to be made between the question of how, and the question of why, we feel?

You indicated in your most recent response to Arnold Trehub that the explanatory gap is a question of "especially why" we feel, and David Chalmers appears to agree:

DC: "The explanatory gap comes from considering the question, Why, given that P ["the complete microphysical truth about the universe"] is the case, is Q ["phenomenal consciousness"] the case? (Why, given that P is the case, is there phenomenal consciousness? And why are there the specific conscious states that there are?)"

LC

2009-05-22
The 'Explanatory Gap'
Reply to Luke Culpitt

THE (NONEXISTENT) EFFECTS OF FEELING ARE A FAR BIGGER PROBLEM THAN THE UNKNOWN CAUSES OF FEELING


LC: "If... 'why we (or our brains) feel is a functional, causal question'..., is there any distinction... between the question of how, and the question of why, we feel? You indicated... that the explanatory gap is a question of 'especially why' we feel, and David Chalmers appears to agree..."
Both questions are functional, causal ones (but they are really flip sides of the same coin). 

"How" is about the causes of feeling and "Why" is about the effects of feeling. 

I don't know about David, but I don't lose much sleep about whether the brain causes feeling (of course it does); and if the only problem with explaining how the brain causes feeling had been some uncertainty about objective measurement of feeling, I would not give such a small explanatory gap much thought. 

No, for me the real puzzle is the "why" aspect rather than the "how" aspect. For whereas it is merely mysterious how the brain causes feeling (but there is no doubt that it does), the real explanatory puzzle is why the brain causes feeling, since there is no room for feeling to have any causal power of its own (even though it feels as if it does), except on pain of telekinetic dualism. That's the heart of the feeling/function problem -- and the real locus and force of the explanatory gap.


-- SH

2009-05-22
The 'Explanatory Gap'
(Is JWKMM perchance V. Petkov?) In any case, I think you have answered your own question: The quantum puzzles and their alleged implications for the causal explanation of dynamics would be there even in a feelingless universe, so they have nothing to do with the feeling/function problem and its explanatory gap.

-- SH


2009-05-22
The 'Explanatory Gap'
Reply to Stevan Harnad
SH: "In any case, I think you have answered your own question: The quantum puzzles and their alleged implications for the causal explanation of dynamics would be there even in a feelingless universe, so they have nothing to do with the feeling/function problem and its explanatory gap"

No, this misrepresents my post; I answered what I took to be the question confronting this thread: "why can't we explain conscious experience (what SH calls 'feelings') in terms of physical theory?" My answer was this:

Feelings are changing events; physical theory cannot (as yet) explain changing events; therefore physical theory cannot explain feelings.

Physical theory cannot explain feelings because physical theory is incomplete, so there is an apparent explanatory gap between physicalism and feelings.
My answer also has a bearing on your reply to LC:

SH: "..the real explanatory puzzle is why the brain causes feeling, since there is no room for feeling to have any causal power of its own (even though it feels as if it does)..."

The reason that there is a "puzzle" is that the question contains a hidden cosmological viewpoint called "materialism".  Materialism is a cosmological theory in which only the present instant exists and time unfolds as a succession of three dimensional forms (Whitehead 1920).  Materialism differs from physicalism: unlike physicalism, materialism is a complete theory, and it is known to be false.  Certainly in materialist cosmology "...the brain causes feeling, since there is no room for feeling to have any causal power of its own...".  However, materialism is known to be false and does not represent modern physicalism.

If we move on to modern cosmological ideas we have what are, for materialists, apparently outrageous answers to the question posed by SH.  For instance, current physical theory suggests that conscious experience (what SH calls "feelings") exposes the world as a changing present moment.  As Weyl put it:

"Only the consciousness that passes on in one portion of this world experiences the detached piece which comes to meet it and passes behind it, as history, that is, as a process that is going forward in time and takes place in space." (Weyl 1920).

SH asks "...why the brain causes feeling...?".  Current physical theory provides a shocking answer to this question. According to physical theory the four dimensional form of the brain contains a succession of states but it is the passage of conscious experience over these states that is change.  In other words conscious experience or "feeling" is the most likely candidate for enacting the events in the brain, not vice versa.

Alfred North Whitehead. (1920) "Time". Chapter 3 in The Concept of Nature. Cambridge: Cambridge University Press (1920): 49-73.

It is worth repeating Whitehead's analysis of Materialism:

"The eighteenth and nineteenth centuries accepted as their natural philosophy a certain circle of concepts which were as rigid and definite as those of the philosophy of the middle ages, and were accepted with as little critical research. I will call this natural philosophy 'materialism.' Not only were men of science materialists, but also adherents of all schools of philosophy. The idealists only differed from the philosophic materialists on question of the alignment of nature in reference to mind. But no one had any doubt that the philosophy of nature considered in itself was of the type which I have called materialism. It is the philosophy which I have already examined in my two lectures of this course preceding the present one. It can be summarised as the belief that nature is an aggregate of material and that this material exists in some sense at each successive member of a one-dimensional series of extensionless instants of time. Furthermore the mutual relations of the material entities at each instant formed these entities into a spatial configuration in an unbounded space. It would seem that space---on this theory-would be as instantaneous as the instants, and that some explanation is required of the relations between the successive instantaneous spaces. The materialistic theory is however silent on this point; and the succession of instantaneous spaces is tacitly combined into one persistent space. This theory is a purely intellectual rendering of experience which has had the luck to get itself formulated at the dawn of scientific thought. It has dominated the language and the imagination of science since science flourished in Alexandria, with the result that it is now hardly possible to speak without appearing to assume its immediate obviousness.

2009-05-22
The 'Explanatory Gap'
Reply to Jamie Wallace

Steve,

For the purpose of evolution, isn't feeling a necessary trait for the survival of complex organisms in a complex environment?

Would a complex organism and complex brain be able to evolve without feelings?

Please correct me if I am missing your point.


2009-05-22
The 'Explanatory Gap'

NO COMPLEXITY THRESHOLD FOR A PHASE TRANSITION INTO FELT FUNCTION -- AND THE WATCHMAKER IS BLIND TO FEELING TOO 


VP: "For the purpose of evolution, isn't feeling a necessary trait for the survival of complex organisms in a complex environment? Would a complex organism and complex brain be able to evolve without feelings? Please correct me if I am missing your point."
I am afraid you are missing the point: Darwinian evolution is, unproblematically, a causal, functional process. Survival, reproduction, behavior, behavioral skills, learning -- all of these are unproblematically functional. So are RNA, DNA, protein synthesis, physiological function, brain function: all functing.

But the explanatory gap is about explaining how and why some functions are felt. That includes explaining it adaptively, evolutionarily, in terms of mutations and selective advantages, for survival and reproduction, of felt functions over unfelt functions.

But the minute you propose a functional advantage that would allegedly be conferred by feeling X (e.g., pain), or by X's being a felt rather than an unfelt function (seeing, vs. optical input processing), it becomes apparent that the functional advantages are identical (indeed Turing-indistinguishable), whether or not they are felt. Feeling does not -- and cannot, on pain of telekinetic dualism -- confer any functional advantages of its own. It merely dangles, inexplicably, and ineffectually.

That is the explanatory gap. Neither adaptive function nor brain function fills that explanatory gap. And simply assuming that there must be a function, even though for each candidate function the feeling can easily be seen to be functionally superfluous, is simply begging the question.

One thing is certain: If there is an answer, it will not be an easy answer. And saying "feeling must have survival value, somehow" would be an easy answer...

(Hand-waving about "complexity" (see Churchland's argument) won't help at all either. How/why should greater functional complexity (if such it is) become felt complexity, rather than just functed complexity, like the rest? What's the functional complexity threshold for a "phase transition" into felt function?)


-- SH

Harnad, S. (2002) Turing Indistinguishability and the Blind Watchmaker. In: J. Fetzer (ed.) Evolving Consciousness. Amsterdam: John Benjamins. Pp. 3-18.
Harnad, S. & Scherzer, P. (2008) First, Scale Up to the Robotic Turing Test, Then Worry About Feeling. Artificial Intelligence in Medicine 44(2): 83-89.


 


2009-05-22
The 'Explanatory Gap'
Reply to Stevan Harnad
SH: ""How" is about the causes of feeling and "Why" is about the effects of feeling."

I'm unclear on what you mean when you say: ""Why" is about the effects of feeling." Are you talking about the same cause-effect pair in the above descriptions of "How" and "Why"? That is, do "the causes of feeling" (given in your description of "How") cause "the effects of feeling" (given in your description of "Why")? Because the effects of "the causes of feeling" would just be feeling itself, or the existence of feeling.

SH: "I don't know about David, but I don't lose much sleep about whether the brain causes feeling (of course it does); and if the only problem with explaining how the brain causes feeling had been some uncertainty about objective measurement of feeling, I would not give such a small explanatory gap much thought. "

Would it be fair to say, then, that the explanatory gap is not concerned with the question of how? Also, wouldn't the answers to the "how" question (or "the complete microphysical truth about the universe") exhaust all of the functional, causal explanations? If so, what form of explanation could answer the "why" question?

SH: "No, for me the real puzzle is the "why" aspect rather than the "how" aspect. For whereas it is merely mysterious how the brain causes feeling (but there is no doubt that it does), the real explanatory puzzle is why the brain causes feeling, since there is no room for feeling to have any causal power of its own (even though it feels as if it does)"

I don't believe that the explanatory gap is also a question of free will. The putative feeling of free will is just one feeling/sensation/perception/thought among many. The explanatory gap as I understand it is to provide an explanation for the mere existence of any and all feeling, in addition to the functional explanation for how the brain causes that feeling.

LC

2009-05-22
The 'Explanatory Gap'
Reply to Jamie Wallace

The project statement for the ANU Centre for Consciousness makes the following statement:

  The focus will be the question: how does human consciousness represent the world? 

As a cognitive neuroscientist, I would rephrase the question this way:

 The focus will be the question: how does the human brain represent the world?

 

My contention is that the biophysical activity of the brain mechanism that represents the world is consciousness. In this formulation, human consciousness is a biophysical function that serves the purpose of presenting the world to us so that it can be understood and adaptively engaged. There is abundant empirical evidence in support of this way of framing the matter.

It seems to me that when philosophy confronts consciousness formulated this way, the challenge for those who espouse an “explanatory gap” is to show why equating consciousness with the brain function that represents the world to us is untenable. In doing so, philosophy must clearly take serious account of current brain theory and relevant research findings.

 

.. AT

 


2009-05-22
The 'Explanatory Gap'
Reply to Stevan Harnad
It is possible that conscious observation is important in the evolution of our part of the multiverse.

The discussion of the block universe above suggests that change is in some way a property of conscious observers, and quantum cosmology leads to explanations for the environment in terms of either an anthropic principle that generates conscious observers such as ourselves, or some anthropic effect of observing systems on the systems that are observed (see Hawking 1999). These considerations hint that the part of the multiverse that we know and love may be, at least in part, selected by the existence of conscious observation. If these putative anthropic and causal effects or something similar actually occur then, in the absence of any conscious observers, the part of the universe that we call home might begin to acquire very different properties and no longer support observing entities - even our favourite robotic machines might decay to dust.

It is conceivable that modern cosmology turns the old materialist certainties on their head with the requirements of conscious observation guiding the evolution of our part of the multiverse rather than a materialist universe selecting the conscious observer. OK, this is speculation, but the last half millennium of materialist cosmology does not even give us the possibility of speculation about why we are conscious.

Reference

Hawking (1999). Quantum Cosmology, M-theory and the Anthropic Principle. http://www.hawking.org.uk/index.php/lectures/physicscolloquiums/68

2009-05-23
The 'Explanatory Gap'
Reply to Stevan Harnad

AT: "In your reformulation, you speak of feelings as being 'unexplained' rather than 'inexplicable'. I have no problem with this change of stance."  
SH: "Again, if we agree that there is no explanation so far of how and why we feel rather than just funct, the burden is on those who think that there ever can be an explanation, in light of the causal obstacles (unlike anything else under the sun) that any explanation would have to surmount. Preferring "unexplained" to "inexplicable" does not help; it just gives the soothing feeling (without justification) that the mind/body (feeling/function) problem is just another problem science has not yet solved; no reason to expect it won't get round to it eventually..."

I didn't agree that there is no explanation so far of how and why we feel. I simply welcomed your change from asserting feelings to be inexplicable to being unexplained. I think that you might have valid reasons for believing that feelings are unexplained, but I don't think your stated reasons for asserting that feelings are inexplicable are valid.

SH: "If feeling were a "brain state" rather than an (unexplained and inexplicable) effect of a brain state, then instead of an effect of the brain state being a causal dangler, the brain state itself would be a causal dangler."

If feeling were a brain state, it would have all the causal biophysical properties of a brain state and could not be considered a "causal dangler".

 

AT: "As physical events, feelings must exist somewhere in the physical universe. A legitimate question is this: If a feeling does not exist as a part of the brain of the individual having the feeling, where does it exist?"
SH: "...nor does pinpointing where (or when -- or even what) we feel help to close the how/why gap one iota..."

Surely, if one claims that feelings are physical but are not located in the brain of the individual having the feelings, one should suggest where else they might be located. 

SH: "When I lift my finger, it feels as if I did it because I felt like it. In reality, my brain did two things: (1) it caused me to feel like lifting my finger and (2) it caused me to lift my finger. The "hard" question about causality, the one that creates the explanatory gap, is: how, and especially why, did my brain bother with (1) at all, since it is obviously causally superfluous for (2), an effectless (ineffectual) correlate (except if telekinetic dualism is true, which it's not)."

If lifting your finger were a reflex, then (1) would be superfluous. But if lifting your finger were an intended action, then you would have to feel like lifting your finger and (1) would be causal (not superfluous). It is also possible that you lifted your finger reflexively and then, after the fact, felt like you lifted your finger because you felt like doing it (1'). In this case (1) would not occur and (1') would be superfluous. A better example of the causal necessity of feeling is planning a trip. In this case you have to imagine (feel) all sorts of things in your phenomenal world before you can act --- make your selection of destination, consider possible weather conditions, when to leave, means of travel, what to pack, etc. 

.. AT


2009-05-23
The 'Explanatory Gap'
Reply to Arnold Trehub

AT : "The project statement for the ANU Centre for Consciousness makes the following statement:

  The focus will be the question: how does human consciousness represent the world? 

As a cognitive neuroscientist, I would rephrase the question this way:

 The focus will be the question: how does the human brain represent the world?"


I would rephrase it simply as: "what is human consciousness"?

One (huge) problem at a time...

DA

2009-05-23
The 'Explanatory Gap'
Reply to Luke Culpitt

FEELING WILLING

LC: "I don't believe that the explanatory gap is also a question of free will. The putative feeling of free will is just one feeling/sensation/perception/thought among many. The explanatory gap as I understand it is to provide an explanation for the mere existence of any and all feeling, in addition to the functional explanation for how the brain causes that feeling."
I am indeed arguing that they (the problem of explaining the causal role of willing and the problem of explaining the causal role of feeling) are exactly the same problem, because the problem of feeling (consciousness) is the problem of the causal status of feeling. -- SH

2009-05-23
The 'Explanatory Gap'
Reply to Luke Culpitt
LC "I don't believe that the explanatory gap is also a question of free will. The putative feeling of free will is just one feeling/sensation/perception/thought among many. The explanatory gap as I understand it is to provide an explanation for the mere existence of any and all feeling, in addition to the functional explanation for how the brain causes that feeling."

Part of Luke's comment ("an explanation for the mere existence of any and all feeling") seems to me to be getting somewhere near the real problem.  Surely, before anything else, the task posed by the so-called explanatory gap is to explain what (human) feeling, thought, experience - in a word, consciousness - is.

Just take one bite of this: what do we mean by a 'feeling'?  What are the elements of a (human) 'feeling'?  What are not? How does it differ from a thought?  What, for that matter, is a thought?  None of this is remotely straightforward or self-evident.  If anyone thinks it is, I invite them to give me a definition of - eg - a 'feeling'.  I can virtually guarantee it will have huge holes in it. So if we can't even say what a feeling is (I certainly have huge difficulty), don't we obviously have a problem at a much more basic level than all this talk of cause-effect, etc etc? Cause and effect between what and what?

The second problem I see in this discussion is the blithe talk of the 'explanatory gap' as if it were a self-evident, self-explanatory fact.  The key 'fact' - if there are any facts at all in this area - is that we are still groping around in the dark where the most basic aspects of consciousness are concerned (eg what is it?). So the phrase 'explanatory gap', as I've said several times, is a kind of philosophical whistling in the dark. The alchemists in medieval times no doubt thought there was just an 'explanatory gap' between what (they thought) they knew and how to turn lead into gold...

DA

2009-05-23
The 'Explanatory Gap'
Reply to Arnold Trehub

HOW/WHY IS PLANNING FELT?



AT: "If feeling were a brain state, it would have all the causal biophysical properties of a brain state and could not be considered a 'causal dangler'."

Feeling is and remains a causal dangler until it is explained how and why certain brain states are felt rather than just "functed." That is precisely as true whether we assume feeling is a "brain state" or feeling is an "effect" of a brain state. Causality (both coming and going) is the problem, either way.

AT: "Surely, if one claims that feelings are physical but are not located in the brain of the individual having the feelings, one should suggest where else they might be located." 

The problem is not the locus of feelings, but their causal status.

AT: "If lifting your finger were a reflex, then [feeling like doing it] would be superfluous. But if lifting your finger were an intended action, then you would have to feel like lifting your finger and [feeling like doing it] would be causal (not superfluous)." 

What on earth does "intending" mean, other than feeling like doing it? Your reasoning is unfortunately circular.

To break out of the circle, explain to me how and why intentional action is felt rather than just functed. A reflex is not only nonintentional (it feels like something, but something passive): it is also simple and automatic. Intentional action is often more complex than a reflex, to be sure (though intentionally lifting a finger is not, and that's why it's better to stick to that example); but how (and even more importantly, why) should the planning of a complex action be felt, rather than just functed, like a reflex?

AT: "It is also possible that you lifted your finger reflexively and then, after the fact, felt like you lifted your finger because you felt like doing it. In this case [feeling like doing it before the fact] would not occur and [feeling like doing it after the fact] would be superfluous." 

And your point is...? 

The question was: How/why is feeling like doing it "before the fact" not superfluous too? (By the way, the "fact" here, as always, is the act; so the question is, what's the point of feeling before the act? Planning before the act is of course unproblematically functional and causal -- but, again, why felt planning, rather than just "functed" planning, e.g., as in a computer or robot?)

AT: "A better example of the causal necessity of feeling is planning a trip. In this case you have to imagine (feel) all sorts of things... before you can act --- make your selection of destination, consider possible weather conditions, when to leave, means of travel, what to pack, etc." 

How/why felt (rather than just functed) selection of destination?

How/why felt (rather than just functed) consideration of possible weather conditions?

How/why felt (rather than just functed) consideration of when to leave?

How/why felt (rather than just functed) consideration of means of travel?

How/why felt (rather than just functed) consideration of what to pack, etc.?

Your reasoning is completely circular, Arnold! You simply take it for granted that certain functions are felt, and as a result you are simply begging the question, with your comfortable focus on brain function: Brain function will explain the causal basis of everything we can do, such as all the things listed above (and lifting our fingers too), but it won't explain how or why any of that functing is felt.

And that's the "hard problem" and the locus of the "explanatory gap". It's a causal gap -- or rather a gap in ordinary causal explanation, which works just fine for everything else, from neutrons to neurons. 

(Please, please let not another quantum mysterian chime in on the QM entanglements of neutrons!)

-- SH



2009-05-23
The 'Explanatory Gap'
Reply to Stevan Harnad
Stevan,

SH:  “I'll settle for your solution to the simple problem of how and why feeling (rather than just functing) is a way of knowing -- as soon as you explain it...”

Your repeated "how/why" questions presuppose the very distinction which is in question here, namely that between feeling and functing.  Until this distinction is clarified, we will remain at an impasse.

I am not persuaded by your sentio or your claim that "feeling something" is an uncomplemented category.  As far as I can tell, your feeling/functing distinction is incoherent (as I will explain in the rest of this post).  It undermines any possible discussion of feelings.  For if feelings have no causal efficacy, they do not make a difference to anything, including the conclusions we draw in our discourse on feelings.  So why do we have words for them? 

Your view makes all talk of feelings superfluous, including the claim that there is a feeling/functing distinction.  This is why I said that, in your view,  the existence of feelings could not be a motivating factor for your position here.  The existence of feelings could not support the conclusions you wish to draw.  Why postulate them?


No Uncomplemented Categories

SH:  “I do not see that anything I have said has anything to do with grammar!”

When I mentioned "grammar" in my previous post, I was using it in the Wittgensteinian sense, which is not purely syntactical.  The notion of "what it is like to be a bachelor" does not pick out any particular feel or category.  To answer the question, a bachelor (who has never been married) might evaluate those aspects of his life which seem to be dependent on his not being married.  He might thus say, “being a bachelor is okay.  I only have to worry about myself.  I can do what I want.  Etc.”  None of that suggests any uncomplemented categories.  And it does not suggest that his conception of being a bachelor somehow preceded his answer to your question.

There is no reason to think that being a bachelor has a uniquely identifiable feel which is only positively sampled by all bachelors.  On the contrary, there is good reason why we shouldn’t think there is anything in particular it is like to be a bachelor.  Generally speaking, there is nothing it is like to not have a third arm, and nothing it is like to not have four arms, and nothing it is like to not have a slightly worn edition of A Tale of Two Cities.  If we admitted all of these “what it is likes” into our experiential set, then each person would have to “sample” (to use your word) an infinite number of feels before they could know what it is like to feel anything at all.

With the category of "feeling something," we are similarly dealing with a family resemblance concept.  There is no "invariant feeling" running through all feelings.

To complement the category of feeling something, we don’t need to know what it feels like to feel nothing at all.  Rather, we must simply have the category of not feeling anything.  And we have that category. 

As far as I know, rocks do not feel anything.  I can regard entities as not having any feelings.  I can distinguish between something which feels and something which does not feel.

This can be explained in the same way that the generic category of “feeling something” is explained.  We have positive and negative categories for feelings.  Some feelings are categorizable as “not feeling boredom” and others as “not tasting mustard.”  I can thus form the categories of “not feeling this” and “not feeling that,” and I can further abstract and form the category, “not feeling anything”.  This is exactly what we do when we abstract from “feeling this” and “feeling that” to “feeling something.”  So why talk about uncomplemented categories here?

Despite your assertion to the contrary, we do not know “what it feels like to feel anything at all, be it headache or toothache.”  “Anything at all” does not pick out any particular experience.  There is nothing it is like to feel anything at all.

The abstract category of “feeling something” does not feel like something in general; rather, it feels like a particular concept.  Similarly, the category of “feeling nothing at all” does not feel like nothing at all.  We do not feel what it feels like to feel something in general, just as we do not feel what it feels like to feel nothing at all.  We feel what it feels like to think about feeling something, of course, but we also feel what it feels like to think about feeling nothing.  There is no lack of complement here.

The Sentio vs. The Cogito 

SH: “Descartes put it in an awkward way. It sounds as if the Cogito ergo sum proves more: as if it proved that  an ‘I’ exists.”

But that was the whole point.  Why claim that this is putting “it” awkwardly?  What is “it” here?  Descartes' explicit claim was that the cogito established to himself that he existed, and that he was “a substance whose essence, or nature, was nothing but thought” (Discourse On Method, Part 4.)  His conclusion was explicitly dualistic: His soul was res cogitans; his body was res extensa.  And from there, he went on to prove the goodness of God and, only then, the trustworthiness of mathematics.  According to Descartes, there was no certainty before the cogito, not even the certainty we associate with logical and mathematical judgments.

You claim that, because the cogito is merely a tautology, it needs to be reformulated so that we can better understand its significance:  that it indicates something unique about feelings; specifically, that we have some peculiar access to the feeling of feelings.  Yet you misrepresent the cogito as “I am thinking, therefore I am thinking.”  That is obviously a tautology, but it is not the cogito.  The cogito is not a tautology, but an inference following modus ponens.  (If I am thinking, then I exist.  I am thinking, therefore I exist.)  And you claim your sentio improves on this, because it reveals something about the nature of feelings.  I do not see how this is so.  For one thing, your "I feel, therefore feeling is felt" is not a valid inference, because there is no feeling of feeling.  So, rather than improve upon the cogito, it seems only to confuse matters.

Wittgenstein:  “It may easily look as if every doubt merely revealed an existing gap in the foundations; so that secure understanding is only possible if we first doubt everything that can be doubted, and then removed all these doubts.”  (Philosophical Investigations, section 87)

The cogito is a valid inference.  I do not question that.  I only question its philosophical importance.  More specifically, I reject the claim that it indicates or establishes a special kind of knowledge which you call “Cartesian certainty.”  And I reject Descartes’ views that it establishes mind/body dualism and provides a foundation for all our knowledge.

Wittgenstein’s point is that there is no gapless foundation to be revealed, because the foundations are not there ahead of time, waiting to be discovered.  We may doubt as far as we want, but this will only lead us to exhaustion.  Foundations are to be built, not uncovered.  We could thus say that Descartes was working in the wrong direction.

The cogito only serves as a reminder that the sentence "I do not exist" is not a valid proposition in our language.  It is a reminder of the rules of our grammar, and not a foundation for knowledge. 

Descartes' point was that his mind was distinct from his body, because he could doubt the latter but not the former.  But how could one doubt that one had a body?  Just try -- in a real case, with discernible consequences -- to doubt that you have a body.  What could such “doubt” consist in, if not just the words, repeated either to yourself or out loud:  “I don’t have a body . . . I am just a mind”?

Those are empty words, no different than, “I don’t have a mind . . . I am just a body.”  Repeating them does not constitute doubt, because these words have no discernible consequences.  They are insignificant.  It would make as much sense to say, “all logic is invalid . . . there are no valid inferences,” or perhaps, “there are no thoughts, only words; no feelings, only functions.”  Such mantras are not to be taken seriously.

A Word About Some Theistic Arguments

SH:  “Theistic??? I have inferred (by abstracting the common invariant across many postings) that NA has some sort of thing about "analytic philosophers." Do you perhaps have some sort of bugaboo too -- with "theists"?”

I think you misunderstood my point.  I was not accusing you of being theistic, nor was I making a point about theists or theistic arguments in general.  I only pointed out that your argument resembles some unconvincing theistic arguments. 

This is a point worthy of some reflection.  While the theistic arguments I’m talking about are varied, they all proceed roughly as follows:  God’s existence is self-evident by the very fact of knowledge.  Therefore, a person who claims that God does not exist is begging the question against theism and is denying their own knowledge.  (You can produce variations on this argument by substituting “truth,” “value,” “morality,” or “meaning” for “knowledge.”)

This argument is meant to show that theism is not only valid, but a necessary presupposition of any system of values or knowledge.  Do you find the argument compelling?

I don’t, because it presupposes that the term “God” has a well-defined meaning, and that the theistic presupposition is coherent. 

Your argument for a functing/feeling dichotomy is similar.  You claim that the unique status of feelings (be it epistemic or ontological or both) is self-evident, and that it is self-evident by the very fact of feeling.  You defend this notion by accusing those who reject it of begging the question and denying their knowledge of feelings. 

How is your argument different from those theistic arguments?


2009-05-23
The 'Explanatory Gap'
Reply to Derek Allan
I agree with your (DA's) "second problem" entirely.  There is no real "explanatory gap", just an apparent gap due to our lack of knowledge of the problem and the incompleteness of physical theory. 

The philosophical "explanatory gap", the one that seems, as you say, "a self-evident, self-explanatory fact", is a result of implicitly or explicitly applying materialist cosmology to the problem of conscious observation and experience.  If the world works solely by the transfer of impressions from place to place (cf: Aristotle), or by things just pushing one on another (cf: Leibniz), or as a result of the transfer of information from place to place (cf: Searle, Harnad), etc., then we cannot ever explain our conscious experience.  However, materialist cosmology is known to be false, so the whole enterprise of defining an insurmountable gap is absurd.

Accepting the explanatory gap as a real and insurmountable obstacle has interesting consequences.  As an example, eliminativism accepts materialist cosmology as true and so denies that conscious experience can exist because it cannot be explained.  As another example, dualism accepts materialist cosmology as true but cannot reject conscious experience, and so invents a supernatural or alternative world as the home for mind.  It is amazing that such ideas are so popular given that materialist cosmology is known to be false.

2009-05-24
The 'Explanatory Gap'
JWKMM: I agree with your (DA's) "second problem" entirely.  There is no real "explanatory gap", just an apparent gap due to our lack of knowledge of the problem and the incompleteness of physical theory.  ......  The philosophical "explanatory gap", the one that seems, as you say, "a self-evident, self-explanatory fact"...

I'm sorry. This is not what I said. My problem is with the idea of a 'gap' itself.  It implies, as I've said several times, that we are on the right track but just can't manage to bridge the final 'gap'.  I do not think that there is any solid evidence that any of the physicalist approaches that prevail in analytic philosophy's accounts of  consciousness are on the right track. Hence my references to medieval alchemists.

And I did not say that the "explanatory gap is a self-evident, self-explanatory fact". I was objecting to "the blithe talk of the 'explanatory gap' as if it were a self-evident, self-explanatory fact." In a nutshell I was suggesting that the phrase is a philosophical red herring.

PS: I was hoping that someone might respond to my questions: "What do we mean by a 'feeling?'  What are the elements of a (human) 'feeling'?  What are not? How does it differ from a thought?  What, for that matter, is a thought?"  Since thoughts and feelings are normally seen as elements of consciousness, and I have been assured more than once that there is no problem in defining consciousness, I assumed that answering these questions would be a doddle...

DA





2009-05-24
The 'Explanatory Gap'
Reply to Derek Allan

AT: "... The focus will be the question: how does the human brain represent the world?"

DA: "I would rephrase it simply as: 'what is human consciousness'? One (huge) problem at a time..."

AT: "My contention is that the biophysical activity of the brain mechanism that represents the world is consciousness [emphasis added]."



As you can see, from my perspective as a cognitive neuroscientist, your human consciousness is just the biophysical activity of the brain mechanism that represents the world to you. This definition is straightforward and simple, but the implications are very wide and deep. For example, the structural and dynamic properties of the brain mechanism that represents the world can contribute to a better understanding in your own area of specialization -- the philosophy of art.


.. AT



2009-05-24
The 'Explanatory Gap'
Reply to Stevan Harnad
SH: "And that's the "hard problem" and the locus of the "explanatory gap". It's a causal gap -- or rather a gap in ordinary causal explanation, which works just fine for everything else, from neutrons to neurons. 

(Please, please let not another quantum mysterian chime in on the QM entanglements of neutrons!)"

The plea is obviously directed at me.  I would like to defend myself against the accusation of "mysterianism".

Mysterianism is defined as being "...characterized by two features: (1) Ontological naturalism: the view that holds (inter alia) that consciousness is a natural feature of the world; (2) Epistemic irreducibility: the view that holds that there is no explanation of consciousness available to us" (Blackwell Companion to Consciousness).

I admit to agreeing with (1) but not with (2). So rest assured, I will not chime in with a mysterian point; I will just restate the fact that very recent experiments strongly support the relativistic prediction that time exists as a physical dimension, so philosophers will need to consider causal relations in the context of a block universe.  Physical theory cannot (and could never) explain why an action potential actually moves up a membrane or why a neutron is emitted at a particular moment from a mass of U235.  It can provide functional descriptions of how things are laid out in time, but why the events of one instant change to the events of the next instant is unknown.  Incidentally, this "problem of the philosophy of change" has been around since the Eleatic school and is just confirmed as a serious problem by the demonstration that dimensional time exists.

As was pointed out above, it appears that "Ordinary causal explanation" is actually due to the effect of conscious observation enacting change.  So when it is claimed that ordinary causal explanation explains most events in the world it is really being claimed that the conscious observation of events explains the changes in the world.

As McTaggart (1908) deduced, if we are able to assign past/present/future to events there must be something about us that is outside time (although McTaggart drew the wrong conclusion from this deduction - see McTaggart's unreality of time). If there is something outside of dimensional time about conscious observation then it would not be surprising if ordinary functional descriptions fail.  What we need to resolve this problem is a better physical theory of the world that explains change - the non-dimensional aspect of time.


McTaggart, J. (1908). The Unreality of Time. Mind: A Quarterly Review of Psychology and Philosophy 17: 456-473. http://en.wikisource.org/wiki/The_Unreality_of_Time



2009-05-24
The 'Explanatory Gap'
Reply to Arnold Trehub
AT: "As you can see, from my perspective as a cognitive neuroscientist, your human consciousness is just the biophysical activity of the brain mechanism that represents the world to you."


What is the (philosophical) basis for this claim?  That is, on what basis do you claim human consciousness is nothing but "biophysical activity"? 

When I add 2 and 2, or feel angry, or reflect on what I did yesterday, am I merely under some kind of illusion that I am doing these things, and is all that really nothing but neurons moving about in my head? (And how could I even have such a thing as "an illusion" if that too is nothing but moving neurons?)  In short, what meaning does the word 'conscious' have for you, if any?  

(I will leave aside the major philosophical problems associated with your words 'represents' and 'world'.)

DA

2009-05-24
The 'Explanatory Gap'
SH: "(Please, please let not another quantum mysterian chime in on the QM entanglements of neutrons!)"
JWKMM: "I would like to defend myself against the accusation of "mysterianism"... defined as... 1 Ontological naturalism: the view that holds (inter alia) that [feeling] is a natural feature of the world; 2 Epistemic irreducibility: the view that holds that there is no explanation of [feeling] available to us".... I admit to agreeing with (1) but not with (2). So rest assured, I will not chime in with a mysterian point, I will just restate the fact that... Physical theory cannot (and could never) explain why an action potential actually moves up a membrane or why a neutron is emitted at a particular moment from a mass of U235..." 

By this definition I am more than happy to declare myself a feeling ("qualia") mysterian; but what I was referring to was quantum mysterians (which you assuredly are!); and, in particular, the importation of quantum mysterianism into the sanctum of qualia mystery: Two unrelated koans neither explain, eliminate nor engulf one another... 

-- Joshu(a)


2009-05-24
The 'Explanatory Gap'

COMPLEMENTING DESCARTES


JS: "Your repeated "how/why" questions presuppose the very distinction which is in question here, namely that between feeling and functing.  Until this distinction is clarified, we will remain at an impasse."

How about the distinction between feeling and doing, then? Is that clear enough? (It's much the same distinction.) 

How and why the brain causes adaptive behavior is a tractable scientific question, a functional one, that will one day have a full, clear answer. 

Not so for how (and especially why) the brain causes feeling. (And that's the point, and the problem, and the gap.)

JS: "if feelings have no causal efficacy, they do not make a difference to anything, including the conclusions we draw in our discourse on feelings.  So why do we have words for them?"

(1) Feelings are there, being felt (when they are being felt).

(2) There is an (unexplained -- and I think causally inexplicable, though undoubtedly -- if not undoubtably -- causal) correlation between our feelings and our doings (hence between our feelings and our sayings), probably explained by the common functional cause that (explicably) causes the doings and (inexplicably) also causes the feelings.  

So there are feelings there, to speak of, and we do speak of them; and speaking certainly has causal consequences. But until and unless there can be a causal explanation of how and why we feel, the only available explanation of why we speak of feelings is that the same cause that (inexplicably) makes us feel and (explicably) act also (mysteriously) makes us speak of feeling; but the fact that we actually feel has no independent causal role, hence no causal explanation. It just dangles on the joint cause of the feeling (unexplained) and the speaking.

(I did speculate a bit -- on one of the earlier threads of this discussion: "WHY WOULD TURING-INDISTINGUISHABLE ZOMBIES TALK ABOUT FEELINGS (AND WHAT, IF ANYTHING, WOULD THEY MEAN)?" -- concerning why Turing-Test-scale robots, with behavioral capacities indistinguishable from our own -- if they were feelingless Zombies -- would speak of feelings at all. One possibility might be that the words would be used as metaphors for unobservable internal states -- unfelt states, but also states that are inaccessible to other agents with which the TT-passing robot must interact adaptively. So "you have hurt me" might be a short-hand for "you have caused damage to my internal functioning." That would make feeling-talk ("mind-reading") functional rather than a dangler, like feelings themselves. But I have not yet carried through the exercise so far as to try to construe what functional role "feeling" talk could play if the exchange between us [in this very email dialogue] were taking place between Zombies, and they were talking specifically about the difference between the functional role of talk about feelings between feelingless Zombies versus talk about feelings between feeling people. Maybe that's just further evidence that there could not be feelingless Zombies Turing-indistinguishable from us. But unfortunately that leaves completely unanswered, yet again, the [same old] question, this time in the form: how and why not! Same old explanatory gap... [Peter Carruthers has a recent target article on this in BBS, but I think he gets it somewhat backwards: it is feeling that is primary, not mind-reading, whether of the unobservable states of others or one's own...])

JS: "Your view makes all talk of feelings superfluous, including the claim that there is a feeling/functing distinction."

No. It just points out that how and why we feel is unexplained (and how and why I think it is also inexplicable: functional superfluousness; no telekinesis; causal inexplicability). 

JS: "The notion of 'what it is like to be a bachelor' does not pick out any particular feel or category."

"What it feels like to be a bachelor" picks out what every waking minute feels like (to a human male) from birth to the first minute one gets married -- at which point it is complemented (and one discovers how right or wrong one had been about "what it feels like to be a bachelor"). No such possibility for what it feels like to be awake, or alive...

JS: "there is nothing it is like to not have a third arm..."

That's largely true (except in contrast to what it feels like to have a third arm, as, say, siamese twins, spiders, or a surgically-altered-me might experience). 

But in general I do agree that arbitrary counterfactual complementations are of no more interest than "what it feels like to see something that is bigger than a breadbox" (which does happen to be complemented) or "what it feels like to have lived fewer than an infinite number of years" (which is not).

We only single out categories in cases where the complement is in some way salient (and where the invariant features of the category members -- relative to the complement members -- are used to resolve uncertainty about what is a member of the category and what is a member of its complement). It does make sense to say "I know what it feels like to be a bachelor," and I can even discover that I was wrong. 

In much the same way, it does make sense to say "I know what it feels like to be alive" or "I know what it feels like to be awake." And we probably do have a pretty good idea from our positive-only evidence. But the difference is that there is no way we can discover whether we were spot-on or not quite right; and perhaps we are not really justified in making all the inferences we tend to make from our uncertain grip on these problematic categories. 

(The standard kluge we use for "what it feels like to be alive" is to complement it with analogies, including an imaginary afterlife or rebirth; and for "what it feels like to be awake" we incoherently complement it with what it feels like to be asleep and dreaming -- which is of course not exactly a "nonawake" experience in the same way that delta [dreamless] sleep is -- but in delta sleep you're gone, so there is no one feeling what it's like...)

JS: "If we admitted all of these “what it is likes” into our experiential set, then each person would have to “sample” (to use your word) an infinite number of feels before they could know what it is like to feel anything at all."

No, not only do all those hypothetical complements never occur to us, but even when they do, they can easily be dismissed as arbitrary, inconsequential and uninformative. Not so for some of them, though, because we persist in thinking of and speaking of them as if the distinction were salient: "It feels good to be alive" or "Some of my brain functions are felt and others are not." Nor are the intended distinctions empty in those cases. They are merely uncomplemented, hence problematic.

(On arbitrary negative categories and their relation to our sense of similarity, see also Watanabe's "Ugly Duckling Theorem.")
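
For readers unfamiliar with Watanabe's result, here is a minimal toy sketch of its point in Python (the object labels and framing are my own illustrative assumptions, not Watanabe's formalism): if every logically possible class counts as a category and none is weighted, then every pair of distinct objects falls under exactly the same number of categories, so unweighted similarity is vacuous and salient, weighted invariants have to do the work.

    from itertools import chain, combinations

    objects = ["swan", "duckling", "toaster"]   # arbitrary illustrative labels

    def all_categories(objs):
        # Every logically possible class = every subset of the object set.
        return list(chain.from_iterable(combinations(objs, r) for r in range(len(objs) + 1)))

    def shared(a, b, objs):
        # Number of possible categories containing both a and b.
        return sum(1 for cat in all_categories(objs) if a in cat and b in cat)

    for a, b in combinations(objects, 2):
        print(a, b, shared(a, b, objects))   # prints 2 for every pair: 2**(n-2) with n = 3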

JS: "There is no "invariant feeling" running through all feelings."

The reason there is no functional invariant here is that it is normally the complement that determines what is and is not invariant in a category: The invariant is relative, based on contrasting what all members of the category share and what all members of its complement lack. (Please let's not get into family resemblances: invariants can be disjunctive and conditional too.) But with positive-only categories, we nevertheless have access to what all the positive instances have in common. After all, we do know we are feeling when we are feeling. We are never in doubt about that...

JS: "To complement the category of feeling something, we don’t need to know what it feels like to feel nothing at all.  Rather, we must simply have the category of not feeling anything.  And we have that category."

I'm afraid not. The positive category is "what it feels like to feel something" and hence the complement would have to be "what it feels like to feel nothing at all." And that category is empty, hence we have no idea (or only incoherent fantasies) of "what it feels like to feel nothing at all." 

(Your error is, I think, a bit like mixing up the categorical distinction between (1) what is alive versus what is non-alive with the categorical distinction between (2) "what it feels like to be alive" versus "what it feels like to be non-alive": We have no trouble distinguishing things that are alive from things that are dead [or have never been alive]; but we never even face the problem of distinguishing "what it feels like to feel something" from "what it feels like to feel nothing at all," because the latter is impossible, hence empty. The only reason you have that category in your repertoire at all is that you are going by the positive instances plus some provisional analogy-based imaginary complement -- as I would be doing, in imagining what it would feel like to be married, whilst I'm still a bachelor -- except that in the case of "what it feels like to feel something" it is certain the imaginary complement is impossible, hence empty.)

(I think you may also be missing the essentially relational nature of feeling: the feeling is always felt, hence it has an implicit feeler: this is taken up in the discussion of the cogito, later below.)

JS: "I can distinguish between something which feels and something which does not feel."

Of course you can, but that's like distinguishing between something that's alive and dead (as in (1) above). That's not the category we're talking about! (We are talking of (2), above.)

JS: "We have positive and negative categories for feelings.  Some feelings are categorizable as “not feeling boredom” and others as “not tasting mustard.”"

I've mentioned these before too. You are complementing the wrong category. What it feels like to feel this (versus that) is perfectly well-complemented. But that's no help if the category in question is "what it feels like to feel something (anything) at all" versus "what it feels like to feel nothing at all." 

(An analogy: If the only sense-modality were vision, and the only experience were to see shapes, and all shapes were colored -- counting black as a color -- then the subordinate category "red" would be complemented by anything non-red, but the superordinate category "colored" would be uncomplemented.)
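
To make the structure of that analogy explicit, here is a minimal toy sketch in Python (the particular shapes and colours are my own illustrative assumptions): within the set of available experiences, the subordinate category has a non-empty complement that can actually be sampled, while the superordinate category does not.

    # Suppose, per the analogy, that the only experiences are coloured shapes,
    # with black counted as a colour.
    experiences = {"red circle", "red square", "blue circle", "black triangle"}

    red = {e for e in experiences if e.startswith("red")}
    coloured = set(experiences)    # by hypothesis, every experience is coloured

    print(experiences - red)       # non-empty: the complement of "red" can be sampled
    print(experiences - coloured)  # set(): "coloured" has no complement within experience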

JS: "I can thus form the categories of “not feeling this” and “not feeling that,” and I can further abstract and form the category, “not feeling anything”.  This is exactly what we do when we abstract from “feeling this” and “feeling that” to “feeling something.”  So why talk about uncomplemented categories here?"

You're simply repeating, I think, your conviction that in complementing subcategories of a category against other subcategories of a category, we are somehow also complementing the category as a whole, against its own complement. But we are not. You are making a category error...

JS: "Despite your assertion to the contrary, we do not know 'what it feels like to feel anything at all, be it headache or toothache.'  “Anything at all” does not pick out any particular experience.  There is nothing it is like to feel anything at all."

The category in question is "what it feels like to feel something," where the something is anything that can be felt. That's no different from saying that once a child has learnt the category "dog," he knows what a dog is, and can now (correctly) recognize any dog at all, not before seen, as a dog. The same is true for "feeling": We (correctly) recognize any feeling we feel at all as a feeling. The difference is that the child has learned the category "dog" from having sampled both dogs and non-dogs, and abstracting the invariants that reliably distinguish any dog at all from non-dogs. 

We have done only part of that with feelings: We can (correctly) recognize "what it feels like to feel something" on every occasion, but we really have no idea how to distinguish "what it feels like to feel something" from "what it feels like to feel nothing at all" (even though we think we have) because it is impossible to feel "what it feels like to feel nothing at all."

JS: "The abstract category of “feeling something” does not feel like something in general; rather, it feels like a particular concept."

The category in question is "what it feels like to feel something" -- not "what it feels like to have the "concept" of someone feeling something" (or of someone being alive, or of someone being awake).

JS: "Similarly, the category of “feeling nothing at all” does not feel like nothing at all."

It sure doesn't, for that would be a contradiction in terms. The category is as empty as a square circle. Only Meinong can manage such a feat...

JS: "We feel what it feels like to think about feeling something, of course, but we also feel what it feels like to think about feeling nothing."

The uncomplemented category in question is not "what it feels like to think about feeling something," it is the category "what it feels like to feel something."

JS: [Re: The Sentio vs. The Cogito] "Descartes' explicit claim was that the cogito established to himself that he existed."

Actually, what was demonstrated (via the "method of doubt") was that it was not true that the necessary truths of mathematics were the only things one could be certain about. There was one other thing: the Cogito, which is that when I am thinking, I cannot doubt that there is indeed thinking going on.

A slight (strategic) mistake was to focus on "thinking" (a rather vague category) rather than feeling (something we all immediately know is happening, when it is happening). 

And a slightly bigger (exegetic) mistake was to infer that the indubitable truth of the cogito was not just that I cannot doubt that I'm feeling when I'm feeling, but that therefore "I" exist (for if the category "thinking" is vague, the category "I" is even vaguer: not empty, just vague) -- rather than just that feeling exists. 

Ergo Sentitur suffices, without overstating the case... It already shows that one does not have to be uncertain about everything other than the necessary truths of mathematics (e.g., the reality of the physical world, the existence of other minds, the truth of scientific laws). One can also be certain that feeling exists.

Feeling! That one certainty among all the other undoubtedly true yet doubtably uncertain truths, such as the physical world, scientific laws, induction, causality, "functionalism." And that one certainty amidst all that less-than-certain functing -- turns out to be a causal dangler, giving rise to an unbridgeable explanatory gap!

JS: "His soul was res cogitans; his body was res extensa.  And from there, he went on to prove the goodness of God and, only then, the trustworthiness of mathematics."

I'm neither a philosopher nor a historian, but I'll bet Descartes did not believe most of that voodoo (which is certainly not what he is rightly famous for). He just said it to avoid the ire of the Inquisition. (I believe he at certain points even stated explicitly that not every cartesian claim he was making was true, hence we would need to read attentively between the lines.)

Certainty about the truth of "not (p and not-p)" is based on necessity (pain of contradiction), not on the benignity of deities; and the certainty of sentio ergo sentitur is based on the (inexplicable, but indubitable) reality of feelings. "Dualism" was just a sop for the metaphysical bean-counters of the day. The force of the Cogito is epistemic, not ontic. The sceptics were not denying the reality of the physical world, just its certainty. By the same token, it was not news that feelings existed; the news was that that was a truth that -- unlike the existence of the physical world -- we could be certain about: as certain as of the necessary truths of mathematics. 

(But a consequence of that same, certain truth, happens to be that there is no way to account for feeling physically [i.e., functionally].)

JS: "You claim that, because the cogito is merely a tautology, it needs to be reformulated so that we can better understand its significance"

The cogito has to be reformulated in terms of feeling rather than thinking, and its conclusion is the fact that it cannot be denied, when feeling is going on, that feeling is going on. But that is not a tautology, even though it sounds like one! I don't know which one of Kant's baroque categories is the right name for it, but the cogito is either a "synthetic a-priori" or an "analytic a-posteriori": it certainly isn't an analytic a-priori (i.e., a tautology).

It cannot be denied that when flying is going on, then flying is going on: That is a tautology. A universal, non-existential statement, necessarily true "in all possible worlds."

But the fact that "it cannot be denied, when feeling is going on, that feeling is going on, hence it is certain that feeling exists" depends essentially on what each of us has actually felt, namely feeling. It is an existential statement that follows from the direct experience of each and every (sentient) one of us.

JS: "The cogito is not a tautology, but an inference following modus ponens.  (If I am thinking, then I exist. I am thinking, therefore I exist.)  You misrepresent it as 'I am thinking, therefore I am thinking'." 

I agree that the cogito is not a tautology (I never said it was). But the right way to put it is that if I am feeling, then feeling exists. (The "I" is a fuzzier, theory-laden notion, not further licensed as "certain" by the cogito. At best, we can say that "it feels like an 'I' exists": but, by the same token, it feels like a physical world exists too, and that's not certain either!) 

I will say this much more, though: Feeling is essentially a "two-part relation": Whenever there are feelings, the feelings are being felt. So it is intrinsic to a feeling that there is both feeling and "feeler." I'm not talking about a fancy self-concept. Just the fact that although there is such a thing as "free-floating depression" in the sense of a depression without a perceived external cause, there is no such thing as a free-floating depression -- or any feeling -- that is unfelt. An unfelt feeling is a contradiction in terms. To that extent, a feeler is intrinsic to feeling, so the existence of a feeling to that extent entails the existence of a feeler. Maybe that's what Descartes meant by the "ego" in the "sum." But that fleeting frame for any feeling is far from what most of us mean by an ego or self, let alone the reality of an immaterial, immortal soul!  

JS: "...your "I feel, therefore feeling is felt" is not a valid inference, because there is no feeling of feeling."

I would say quite the opposite: There is no unfelt feeling. A feeler/felt relation is intrinsic to feeling. And if that's what Descartes meant by "I exist" then he was right again. But that "I" is simply an intrinsic part of the nature of feeling itself. So the existential claim of the cogito (sentitur) is still only that feeling exists. The feeling/felt relation just comes with the territory. (One cannot be certain, for example, that the feeler of the feeling is the same feeler as an instant ago: that does not sound like a sound basis for an enduring ego, let alone an eternal soul...)  

JS: "I reject the claim that [the cogito] indicates or establishes a special kind of knowledge which you call 'Cartesian certainty'.” 

Call it what you like; it's the only truth other than the necessary truths of mathematics about which we can be dead-certain.

JS: "And I reject Descartes’ views that it establishes mind/body dualism and provides a foundation for all our knowledge."

(1) "Mind/body dualism" is a figure of speech; it means next to nothing. What the certain existence of feelings establishes is the certainty of the existence of something that cannot be explained in the same functional way that the rest of what exists (truly, but without the added boost of certainty) can be explained. Reformulated as the "feeling/function" problem, it becomes obvious that the problem is one of explanation -- explaining how and why there is feeling rather than just functing.

(2) Without feeling, there would be no "knowledge," only functing. (I never said or invoked a single word about "foundations of knowledge.") "Knowledge" in books and computers and (insentient) robots is not knowledge; it is just data and dynamical states. The only knowing is felt knowing. Ditto for meaning.

JS: "Wittgenstein’s point is that there is no gapless foundation to be revealed."

Wittgenstein seems to have spent half his life trying to build foundations and the other half tearing them down. That's fine, but it has next to nothing to do with the rather straightforward, non-foundational question at issue here: Why and how do we feel rather than just funct? And I don't know about other explanatory gaps, but the one at issue here is that one. Generalities about multiplicities of foundational gaps, all over the map, don't answer the rather straightforward question of how and why we feel rather than just funct (any more than specific foundational quantum gaps do).

JS: "The cogito only serves as a reminder that the sentence "I do not exist" is not a valid proposition in our language.  It is a reminder of the rules of our grammar, and not a foundation for knowledge."

To repeat, I said nothing about grammar, nothing about foundations of knowledge, nothing particular about language, and nothing even about whether or not I exist. I just said feelings exist, for sure: And then I asked "how and why?"

JS: "how could one doubt that one had a body?"

Same way you can doubt there's a world, causality, reliable induction, other minds. You'd be wrong to conclude they do not exist, because they're all real enough; but there's certainly room for doubt wherever there are no guarantors for certainty. Descartes pointed out the two exceptions. One (necessary truth on pain of contradiction) was no big surprise; but the certainty of feeling (surely the nether pole of the platonic-personal or objective-subjective spectrum!) was a bit of a jolt. And the upshot was the explanatory gap.

JS: "What could such “doubt” consist in...?"

I have no trouble at all distinguishing the (foolish) sceptics who claimed that the world was an illusion, from the wise ones who simply pointed out that there were some truths one could know with certainty and some truths one could only know with probability. Without Descartes, we might wrongly have thought that the mathematical truths were the only ones we could know with certainty.

JS: "Those are empty words, no different than, 'I don’t have a mind . . . I am just a body'...Repeating them does not constitute doubt, because these words have no discernible consequences.  They are insignificant.  It would make as much sense to say, “all logic is invalid . . . there are no valid inferences,” or perhaps, “there are no thoughts, only words; no feelings, only functions.”  Such mantras are not to be taken seriously."

I'm afraid it sounds to me more as if it is you who are repeating mantras without reflecting on the meaning or grounds for what you are saying: Doubting I feel is self-contradictory (if/when I do feel, and I do). Doubting I have a body is not self-contradictory, just false. Doubting things that are provably true on pain of contradiction reduces both affirmation and denial to empty gibberish. 

JS: "...your argument resembles some unconvincing theistic arguments [such as] God’s existence is self-evident by the very fact of knowledge.  Therefore, a person who claims that God does not exist is begging the question against theism and is denying their own knowledge... Do you find the argument compelling?"

No more compelling than that "the Great Pumpkin's existence is... etc." It's just arbitrary gibberish. Please see the discussion of Pascal's Wager. The existence of feelings is anchored in our undeniable experience: gods and goblins are arbitrary inventions of feverish imaginations or charlatans.

JS: "Your argument for a functing/feeling dichotomy is similar.  You claim that the unique status of feelings (be it epistemic or ontological or both) is self-evident, and that it is self-evident by the very fact of feeling.  You defend this notion by accusing those who reject it of begging the question and denying their knowledge of feelings. How is your argument different from those theistic arguments?"

Let me ask you, instead, what plays the demonstrative role of the cogito in the case of hobgoblins?

-- SH


2009-05-25
The 'Explanatory Gap'
Reply to Derek Allan
DA: "[1] When I add 2 and 2, or feel angry, or reflect on what I did yesterday, am I merely under some kind of illusion that I am doing these things, [2] and is all that really nothing but neurons moving about in my head? [3] (And how could I even have such a thing as "an illusion" if that too is nothing but moving neurons?)  [4] In short, what meaning does the word 'conscious' have for you, if any?"


[1] When you add or feel angry or reflect on your experience, you are not under any kind of illusion because you are a part of the real world that you live in at the same time as your brain represents the world to you from your privileged egocentric perspective. So if you are really adding 2 and 2 your experience of doing so conforms to what is happening in the real world and is not an illusion. On the other hand, if you are not adding numbers (real world) but only experience/think that you are adding numbers (phenomenal world), then you are indeed under some kind of illusion (or delusion).


[2] This is the wrong way to think about it. The brain mechanism that represents the world for you doesn't work by having neurons moving around. It works by systematic bioelectric and molecular changes in particular kinds of neuronal structures.


[3] Forget about moving neurons. An illusion is a mismatch between a feature of your phenomenal brain representation of something in the real world and the way that something really is. The moon illusion (which troubled Aristotle) is a good example. The moon at the horizon looks much larger than the moon at its zenith, yet the angular size of the moon as it projects to the eye is the same (~0.5 degrees) at the horizon or high above. I have been able to show that the moon illusion (as well as many other classical illusions) is a natural consequence of the structure and dynamics of the brain mechanism that represents our world.
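
As a back-of-the-envelope check on the ~0.5 degree figure, here is a minimal sketch in Python using rounded textbook values for the Moon's diameter and distance (the numbers are approximations I have supplied, not data from any particular study). It also shows that the projected size is, if anything, marginally larger when the Moon is overhead, since the observer is then closer by roughly one Earth radius -- so the horizon moon's apparent enlargement cannot lie in the optics.

    import math

    MOON_DIAMETER_KM = 3474.0       # approximate
    CENTRE_DISTANCE_KM = 384400.0   # approximate mean Earth-Moon centre-to-centre distance
    EARTH_RADIUS_KM = 6371.0        # approximate

    def angular_size_deg(diameter_km, distance_km):
        # Angular diameter, in degrees, of a disc of the given width at the given distance.
        return math.degrees(2 * math.atan((diameter_km / 2) / distance_km))

    # Observer-to-Moon distance is roughly centre-to-centre at the horizon,
    # and roughly one Earth radius less when the Moon is overhead.
    print(angular_size_deg(MOON_DIAMETER_KM, CENTRE_DISTANCE_KM))                    # ~0.52 deg (horizon)
    print(angular_size_deg(MOON_DIAMETER_KM, CENTRE_DISTANCE_KM - EARTH_RADIUS_KM))  # ~0.53 deg (overhead)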


[4] For me, being conscious means having a living brain with an active mechanism that represents the world from a privileged egocentric perspective. Another way to express this is to say that a conscious person is the origin of an internally represented volumetric surround (the world). The content of your phenomenal surround will vary according to the contingencies of your engagement with the world.


.. AT

2009-05-25
The 'Explanatory Gap'
Reply to Jamie Wallace
If a zombie doctor were talking to its zombie patient, the zombie doctor would still need to translate the zombie patient's reported states so it could make the correct diagnosis of which of the zombie patient's modules or subsystems was malfunctioning.  So we may say that our own human feelings are the analog equivalent of the zombie-produced language, which originates as digital information. So feelings, when they occur, are actually information.

The zombie patient said to the zombie doctor, "How much do I owe you doc?".
The zombie doctor said, "$5000 dollars".
The zombie patient replied, "$5000 dollars! Since when does a zombie need that much money!".
The Zombie doctor replied,"Even my kids need new software and batteries, ya know!"

2009-05-25
The 'Explanatory Gap'
Victor, you're missing the point. There's no problem with the reporting of inner states. The problem is that we feel them. The same source that generates the feeling (which, we feel, in turn generates the report of the feeling) can just generate the report directly. If not: how and why not?

2009-05-25
The 'Explanatory Gap'
Reply to Arnold Trehub
AT: The brain mechanism that represents the world for you doesn't work by having neurons moving around. It works by systematic bioelectric and molecular changes in particular kinds of neuronal structures.

From a philosophical point of view it amounts to the same thing.

There are too many problems in the remainder for me to address. Eg you say: "a conscious person is the origin of an internally represented volumetric surround (the world)".  So the individual consciousness is the origin of the world?  The 'world' is entirely the creation of my consciousness?  Or does it have some independent existence?  And what is 'the world' anyway? My world is certainly not just a 'volumetric surround'.

DA  

2009-05-26
The 'Explanatory Gap'
Reply to Stevan Harnad
SH: "There's no problem with the reporting of inner states. The problem is that we feel them. The same source that generates the feeling (which, we feel, in turn generates the report of the feeling) can just generate the report directly. If not: how and why not?"

The same source that generates the feeling cannot generate the report of feeling directly because the brain mechanism that is able to generate the feeling is significantly different than the brain mechanisms that are able to generate our reports of feelings/inner states/phenomenal content.


.. AT 





2009-05-26
The 'Explanatory Gap'
Reply to Stevan Harnad
This discussion seems to me to be revolving around the old issues that have haunted the philosophy of mind for millennia:

How can the transfer of impressions create mind? (Aristotle)
If mind exists then how can mind be other than epiphenomenal? (Leibniz, Huxley/James)
Can there be zombies? (Descartes' animals)
Can there be abstract ideas? (Berkeley)
Do I exist because I think? (Descartes)

These diversions are very entertaining, but the primary subject of this thread is "the explanatory gap" -- i.e., can physical theory explain mind?

The answer to this question is apparently simple: all we need is a definition of mind and an accurate knowledge of current physical theory. Sadly, it is at this stage that the discussion disintegrates.  When considering mind, some correspondents declare that mind is little different from the world itself, or that mind is some sort of ill-defined 'feelings' that cannot be related to ordinary 'functions'.  When considering physical theory, most correspondents would rather that we didn't, and the obvious truth that if we can't explain the physical world then we can't explain mind in terms of physical theory is dismissed as pure Zen obscurantism.

How do we escape from the philosophical perseveration that leads us to endlessly re-visit the old issues when exploring the explanatory gap?  

I would propose that we look more closely at the physical theory of conscious observation and examine the time and space of conscious experience, time and space being essential for linking observation to physical theory. For instance, when exactly did SH have his 'feelings': now, or just then? If feelings are always past, could they be causal? Where were these feelings: in his head, in his gut, or where?


2009-05-26
The 'Explanatory Gap'
JWKMM: "the primary subject of this thread is 'the explanatory gap' -- i.e., can physical theory explain mind?"

Yes. And since there are two elements to this equation - physical theory and mind - one presumably needs to know what one means by both of them.  Hence the need to explain what one means by mind - or consciousness.  And hence the impossibility of escaping philosophical analysis of this problem, and, equally, the impossibility of restricting one's purview simply to 'physical theory'.

DA

2009-05-26
The 'Explanatory Gap'
Reply to Arnold Trehub

FEELING, WILLING AND DOING: WE ARE ALL ANOSOGNOSIC CONFABULATORS


AT: "The same source that generates the feeling cannot generate the report of feeling directly because the brain mechanism that is able to generate the feeling is significantly different than the brain mechanisms that are able to generate our reports of feelings..."
I wonder why you would say this, Arnold, since (1) we have no idea how or why brain mechanisms generate, say, felt sensations rather than just sensed sensations and (2) (although it is probably flawed methodologically), the Libet premotor potential data -- which (seem to) show that an unfelt premotor potential precedes both the moving and the feeling that one is voluntarily moving -- suggests (unproblematically) how (2a) a prior unfelt process can cause movement as well as (mysteriously) how (2b) a prior unfelt process can cause the feeling of willing the movement. Brain locus is certainly not a problem in principle. (Nor is locus in itself particularly explanatory, functionally, even for unfelt functions!)
And we all know that once a movement is a fait accompli, the only thing the anosognosic patient can do is confabulate and rationalize it, as in the case of the movement of the split-brain patient's left arm in response to a stimulus in the speaking hemisphere's unseen visual half-field. Restore all the connections -- and hence of course all the correlations -- and you have our ordinary intact anosognosia about the real causes of our movements. (In other words, until and unless the causal role of feeling can be explained, we are all anosognosic confabulators about the causes of our doings!)

Note, by the way, the close relation between the feeling/function problem itself, and the problem of volition, for they are in fact the same problem, the feeling/function problem being a problem about the causal status, hence the causal explanation of feeling: The explanatory gap is a gap in the power of causal explanation to account for feeling.

(Note also how ordinary anglo-saxon gerunds like "doing," "feeling," and "willing" can help keep us honest on these tricky questions -- with the help of the not-so-anglo-saxon gerund "functing"...)

P.S. If anyone looks up the definition of "anosognosia" or "confabulation" on google's hero, wikipedia, instead of google books or google scholar, please be cautious and sceptical about what you "learn": I just checked "confabulation (neural networks)" and found a piece of empty self-puffery. A textbook of neurology or neuropsychology is a more reliable source.


-- SH

2009-05-26
The 'Explanatory Gap'
Reply to Derek Allan
Derek, The real world is not the creation of your consciousness, but your phenomenal world, within which you are at the spatial origin, is your consciousness. The content of your phenomenal volumetric surround can be sparse or rich depending on the contingencies of your engagement with the real world.


.. AT

2009-05-26
The 'Explanatory Gap'
Reply to Arnold Trehub
AT: "but your phenomenal world, within which you are at the spatial origin, is your consciousness"

I'm not sure what a "phenomenal world" is. But if the phrase is just a general label for the world of persons, things and events, I really struggle to see how it could be my consciousness. Is my consciousness the car parked down the street, the war in Afghanistan, the music I was listening to last week, etc.? I can imagine an argument that I might be conscious of those things, but to say they are my consciousness seems very odd. How (e.g.) could a car be my consciousness?

Also, how does the 'real' world differ from the 'phenomenal' world - or are they the same?

DA

2009-05-27
The 'Explanatory Gap'
Reply to Stevan Harnad
AT: "The same source that generates the feeling cannot generate the report of feeling directly because the brain mechanism that is able to generate the feeling is significantly different than the brain mechanisms that are able to generate our reports of feelings..."
SH: "I wonder why you would say this, Arnold, since (1) we have no idea how or why brain mechanisms generate, say, felt sensations rather than just sensed sensations and (2) (although it is probably flawed methodologically), the Libet premotor potential data -- which (seem to) show that an unfelt premotor potential precedes both the moving and the feeling that one is voluntarily moving -- suggests (unproblematically) how (2a) a prior unfelt process can cause movement as well as (mysteriously) how (2b) a prior unfelt process can cause the feeling of willing the movement."

(1) In order to have an idea of how/why brain mechanisms generate felt/conscious sensations rather than unfelt/unconscious sensory events we need to refer to a theoretical model of the cognitive brain which details the neuronal structure and dynamics of the brain mechanism for our global phenomenal content and the brain mechanisms serving our separate sensory modalities. I have spent many years developing such a model. To get an idea of the difference between our mechanism for representing the world and our mechanism for processing sensory data, take a look at these two chapters in my book The Cognitive Brain (MIT Press 1991):


Chapter 3 "Learning, Imagery, Tokens, and Types: The Synaptic Matrix"
Chapter 4 "Modeling the World, Locating the Self, and Selective Attention: The Retinoid system"


(2) This is not at all mysterious when you understand that there is recurrent axonal excitation between the mechanism that represents our global phenomenal world (including selective attention to events in the world) and the mechanisms that serve our separate sensory-motor modalities. The Libet timing data are not problematic when the operational details of the cognitive brain system are taken into account.


These issues are tricky, but I think we can get a better understanding of them if we examine a theoretical model of how the brain might do the job.


.. AT

2009-05-27
The 'Explanatory Gap'
Reply to Derek Allan
DA: "Also, how does the 'real' world differ from the 'phenomenal' world - or are they the same?"


Your phenomenal world is the real world as it appears to you. So the car, the music, and the war exist in both the real world and in the phenomenal world of the conscious part of your brain. Notice that each phenomenal event must exist somewhere within your phenomenal space.

.. AT 





2009-05-27
The 'Explanatory Gap'
Reply to Arnold Trehub
AT:  "Your phenomenal world is the real world as it appears to you. So the car, the music, and the war exist in both the real world and in the phenomenal world of the conscious part of your brain."

So what form does this 'phenomenal world' take?  Is it a material thing inside my head?  If not, what?

DA

2009-05-27
The 'Explanatory Gap'
Reply to Arnold Trehub

PROCESSES DON'T BECOME FELT BY FIAT


AT:  "...to have an idea of how/why brain mechanisms generate felt/conscious sensations rather than unfelt/unconscious sensory events we need to refer to a theoretical model of... the brain mechanism for our global phenomenal content and the brain mechanisms serving our separate sensory modalities."
My guess: The circularity comes with the "global phenomenal content": How/why is "global content" felt content? 
AT: ['how... a prior unfelt process can cause the feeling of willing the movement'] is not at all mysterious when you understand that there is recurrent axonal excitation between the mechanism that represents our global phenomenal world (including selective attention to events in the world) and the mechanisms that serve our separate sensory-motor modalities."
How does "recurrent axonal excitation" explain how/why "global" content, or selective (or unselective) attention become felt content, and felt attention? Neural structures and processes do not become felt by fiat. And correlation is not, nor does it explain, causation.


-- SH


2009-05-28
The 'Explanatory Gap'
Reply to Stevan Harnad
SH: "PROCESSES DON'T BECOME FELT BY FIAT"

Stevan, I've proposed that brain activity that represents the world from a privileged egocentric perspective IS our global phenomenal content (felt content). You then continue to ask: How/why does this global content CAUSE feeling? It seems to me that if my theoretical premise is that this particular brain activity is the same thing as feeling, your question is a non sequitur. It's like asking why F causes F after the identity F = F. Your repetition of the how/why question with regard to feeling suggests that there has to be something more than a biophysical explanation of feelings --- something extra that a mere physical explanation cannot account for. Perhaps you're a closet dualist --- or nagged by an unacknowledged belief that feeling just cannot be merely a physical event. If you refuse to evaluate a biophysical explanation of consciousness on its own terms, then you will continue to repeat your question.


In his article in The Cognitive Neurosciences III (MIT Press 2004), Chalmers wrote: 


"The possibility of such [neural] principles holds out the tantalizing prospect that eventually, we might use them to predict features of an organisms subjective experience based on knowledge of its physiology.


The operating principles of the brain's putative retinoid system have enabled the prediction of detailed features of subjective experience in human subjects (see references). So Chalmers's prospect has, in fact, been realized.


References

Trehub (1991). The Cognitive Brain. MIT Press.
Trehub (2007). Space, self, and the theater of consciousness. Consciousness and Cognition.
  
.. AT

2009-05-28
The 'Explanatory Gap'
Reply to Derek Allan
SH: "So what form does this 'phenomenal world' take?  Is it a material thing inside my head?"

Yes, your phenomenal world is (roughly speaking) something physical/material inside your head, or more specifically inside your brain.


.. AT

2009-05-28
The 'Explanatory Gap'
Reply to Arnold Trehub
SH: "Yes, your phenomenal world is (roughly speaking) is something physical/material inside your head, or more specifically inside your brain."

Amazing! So there are motor cars, wars in Afghanistan, dogs, cats, and all sorts of other objects large and small physically there inside my brain!

DA

2009-05-28
The 'Explanatory Gap'
Reply to Arnold Trehub

ASK A SIMPLE QUESTION...


AT: "I've proposed that brain activity that represents the world from a privileged egocentric perspective IS our... (felt content)"
Well, that would be a quick solution to the feeling/function problem: just propose that a bit of function IS feeling. Then there's no more hows and whys about it! 

But, apart from your proposing that it is so, how and why is it so? As far as I know, brain activity is just brain activity, i.e., function is just function. And the question on the table was, and continues to be: How is (some of) it felt? Why is it felt? 

"Because I have proposed it" is alas not an answer! 

(Nor, by the way, is the fact that the brain activity "represents the world from a privileged egocentric perspective" an explanation. Adaptively (i.e., functionally) speaking, there is a lot to be said for "representing the world from a privileged egocentric perspective" -- but how and why is that "representation from a privileged egocentric perspective" a felt "representation from a privileged egocentric perspective" rather than just a functed "representation from a privileged egocentric perspective"?)
AT: "if my theoretical premise is that this particular brain activity is the same thing as feeling, your question is a non sequitur."
But how/why questions are not answered by proposing theoretical premises: they are answered by explaining how and why. You are just begging the question with a solution by fiat.
AT: "Your repetition of the how/why question with regard to feeling suggests that there has to be something more than a biophysical explanation of feelings"
A biophysical explanation can answer a biophysical question. I asked how and why the biophysics is felt biophysics. It is not an answer to say that feeling just IS biophysics (because I propose that it is so). Even if your proposal is somehow true, the question is how and why is it true. How, and why, is that biophysics felt biophysics, rather than just (the usual) functed biophysics? (As far as I know, all you offer by way of an answer is correlations. Well, if your proposal is true, there will certainly have to be those correlations; but the correlations certainly don't explain how and why your proposal is true. They are part of what needs to be explained.)
AT: "If you refuse to evaluate a biophysical explanation of [feeling] on its own terms, then you will continue to repeat your question."
But I did not hear a biophysical explanation of feelings, and that is why I continue to repeat my question. All I heard was a proposal that that biophysics just IS feeling, somehow. An explanation is supposed to tell me how and why X just IS feelings. Neither your proposal -- nor the (familiar) correlation itself -- is an explanation at all.
AT: "Perhaps you're a closet dualist..."
Not at all. I'm sure the brain causes feelings, somehow. I'm just asking how (and especially why), since felt functing -- precisely because telekinetic dualism is false -- seems utterly superfluous, functionally (i.e., causally): Just functed functing looks like it would do the very same job, exactly as well. (If not, then please explain how and why not: It's the same question either way!)

It's not a trick. And I am not just a compulsive or perverse repeater of the question "how/why". There is really an explanatory gap here, and it is not filled by merely proposing that functing that is correlated with feeling just IS feeling. It is filled by explaining how and why it is feeling.

And note that my insistence on putting and keeping the focus on feeling itself (rather than on equivocations such as seeing, knowing, representing, perspective, or ego, all of which -- if unfelt -- have exactly the same functionality) is intentional: to keep us honest, and to make and keep it crystal clear exactly what the real problem is (and always has been). 
And to make it harder to keep begging the question...


-- SH


 


2009-05-28
The 'Explanatory Gap'
Reply to Derek Allan
To draw an analogy between consciousness and a pen, the explanatory gap is like saying that the pen can write on anything except itself (just as consciousness can provide a physical explanation for anything except itself). [Could this be a type of self-referential paradox, such as the Barber or Russell's??] I would question the need for the pen to write upon itself. It is the instrument with which we write. Taking the analogy back to its origin, for what purpose do we need a physical explanation for consciousness? It is the "instrument" with which "we" communicate physical explanations (and without which there could be no communication).

DA: "Surely, before anything, the so-called explanatory gap is to explain what (human) feeling, thought, experience - in a word, consciousness - is."

I don't see it as a problem of definition. Basically, consciousness is what is lacking in someone who is unconscious (where, e.g., the person is in a coma, is under general anesthetic, is asleep; where the person is alive but temporarily uncommunicative).

DA: "Just take one bite of this: what do we mean by a 'feeling?'   What are the elements of a (human) 'feeling"?  What are not? How does it differ from a thought?  What, for that matter, is a thought?  None of this is remotely straightforward or self-evident."

What could be more self-evident than one's conscious, experiential states? But (in case that question leads astray) elements of consciousness need not be presented as evidence to the self; those elements constitute the self, which is the ground of our judgments upon evidence or explanations. Therefore, what does 'evidence' (of a feeling/perception/thought) mean here, i.e., to whom could/would evidence of subjective experience be presented? And if it could not be presented as evidence to anyone, including itself, then how is subjective experience supposed to enter into our physical explanations?

Subjective experience is the basis for the communication of physical explanations. Thus, there is no need (or, perhaps, possibility) to include subjective experience in a physical explanation. Unless, perhaps, the physical explanation is of a subjective experience. But here the experience can only play the role of the thing to be explained, with the explanation being (presumably) given entirely in causal, physical terms of brain function.   

DA: "If anyone thinks it is, I invite them to give me a definition of - eg - a 'feeling'.  I can virtually guarantee it will have huge holes in it. So if we can't even say what a feeling is (I certainly have huge difficulty), don't we obviously have a problem at a much more basic level than all this talk of cause-effect, etc etc? Cause and effect between what and what?"

If you don't know what a feeling is, then how could you understand the problem of the explanatory gap? If you do know what a feeling is, then for what purpose do you need it to be (better?) defined in language?

"296. "Yes, but there is something there all the same accompanying my cry of pain. And it is on account of that that I utter it. And this something is what is important--and frightful."--Only whom are we informing of this? And on what occasion?"
(Wittgenstein, Philosophical Investigations.)

LC


2009-05-28
The 'Explanatory Gap'
Reply to Derek Allan
DA: "Amazing! So there are motor cars, wars in Afghanistan, dogs, cats, and all sorts of other objects large and small physically there inside my brain!"


All of these objects and events are out there in the real world. What is amazing is that you can have neuronal representations of all of these and more inside your brain. What do you think guides the brush of an artist?


.. AT

2009-05-29
The 'Explanatory Gap'
Reply to Arnold Trehub
AT: "What is amazing is that you can have neuronal representations of all of these and more inside your brain."

Oh, I see. It's not actually a world inside the brain as you seemed to be saying ("Yes, your phenomenal world is (roughly speaking) something physical/material inside your head, or more specifically inside your brain.") It's actually a representation of this world.

So is this representation a "physical/material" thing? If, by some ingenious surgery, we could open up someone's brain while they were conscious and seeing their "phenomenal world", would we see these representations - even if we needed a microscope?  Would we see, for example, little pictures on a sort of cinema screen in the brain?  If not, what would we see?
 
DA

2009-05-29
The 'Explanatory Gap'
Reply to Stevan Harnad
AT: "I've proposed that brain activity that represents the world from a privileged egocentric perspective IS our... (felt content)"
SH: "... just propose that a bit of function IS feeling. Then there's no more hows and whys about it!

Not at all. If the structural and dynamic properties of the brain mechanism that is the source of the relevant function (feeling) are specified, then many hows and whys about particular instances of feeling can be answered. That's the point of a scientific theory.


SH: "As far as I know, brain activity is just brain activity, i.e., function is just function. And the question on the table was, and continues to be: How is (some of) it felt? Why is it felt?"

Stevan, it seems to me that you are asking for an explanation of the very existence of feeling. As I have said before, we can't explain the sheer existence of consciousness/feeling just as science can't explain the sheer existence of space-time. But I believe we can explain the content of consciousness/feeling just as science can explain the content of space-time. (With the understanding, of course, that all scientific explanation is provisional.) 


SH: [Earlier in this thread] "Feeling exists as surely as gravity does ... but there the resemblance ends, because feelings can have no causal power ..."

If feelings are the activity of a brain mechanism then feeling can certainly have causal power. Within the framework of the brain mechanism for phenomenal content that I have proposed (the retinoid system), the causal power of feeling has been explicated, tested, and supported.


SH: "But how/why questions are not answered by proposing theoretical premises: they are answered by explaining how and why. You are just begging the question with a solution by fiat."

Is proposing a theory committing an act of fiat? My solution is not simply a theoretical premise. The how and why of phenomenal content are explained by the detailed neuronal structure and dynamics of the theoretical model.
 

SH: "Just functed functing looks like it would do the very same job, exactly as well."

I confess that I have no idea what functed functing is.


.. AT

2009-05-29
The 'Explanatory Gap'
Reply to Arnold Trehub
AT: "I confess that I have no idea what functed functing is."
Okay, here's an example (deliberately simplified to just the core essentials): You have tissue injury. You have nociception, which detects the injury and generates a withdrawal and avoidance of the nociceptive stimulus that caused the tissue injury. That's fine, and perfectly adaptive, and perfectly functional. But we all know that's not the whole story. If it were the whole story, it would just be functing. We also feel the nociception, in the form of the pain; we don't just funct it, as I first described it. That's no longer just functed functing, it's felt functing. And that's what generates the feeling/function problem, the how/why question, and the explanatory gap. For not only is it not at all clear how the nociception generates the feeling of pain, rather than just generating the functional state that leads to doing the useful things we do when we feel pain (including all the complicated cognitive planning); but it is even less clear why this functing is felt: the feeling itself seems to serve no additional purpose at all.
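To make the contrast concrete, here is a minimal toy sketch (hypothetical, and not anyone's actual model of nociception) of the purely functed story just described, written in Python: injury is detected, withdrawal and avoidance are generated, and nothing in it mentions or requires feeling. The class name, threshold and stimuli are illustrative assumptions only.

# A purely "functed" nociception loop: detect damage, withdraw, avoid.
# Nothing here refers to feeling; the whole adaptive story runs without it.
class FunctedNociceptor:
    def __init__(self, damage_threshold: float = 0.5):
        self.damage_threshold = damage_threshold
        self.avoid_list = set()          # stimuli to steer around in future

    def sense(self, stimulus: str, tissue_damage: float) -> str:
        """Detect injury and return a motor command -- pure input/output."""
        if tissue_damage > self.damage_threshold:
            self.avoid_list.add(stimulus)   # learn to avoid the stimulus
            return "withdraw"
        return "continue"

    def plan(self, stimulus: str) -> str:
        """'Cognitive' avoidance: detour around previously damaging stimuli."""
        return "detour" if stimulus in self.avoid_list else "approach"

agent = FunctedNociceptor()
print(agent.sense("hot plate", tissue_damage=0.9))   # -> withdraw
print(agent.plan("hot plate"))                       # -> detour

The how/why question is why, in us, anything over and above this kind of loop is felt at all.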

And I don't think that declaring "it's a 'given' that certain functions are felt, just as it is a 'given' that gravity pulls" is an answer. It simply begs the question, a very reasonable and natural how/why question of the kind whose answer -- in all other areas, but not in this special case -- is eventually discovered (or there's no reason to think it can't or won't be). Here, in contrast, there are unique reasons to believe it never will be.


-- SH

2009-05-29
The 'Explanatory Gap'
Reply to Luke Culpitt
LC: "I don't see it as a problem of definition. Basically, consciousness is what is lacking in someone who is unconscious (where, e.g., the person is in a coma, is under general anesthetic, is asleep; where the person is alive but temporarily uncommunicative)."

Comatose states are simply states where we say that consciousness is "not there". (Where has it gone, I wonder...?)  They tell us absolutely nothing about what consciousness is. The idea that studying people in 'vegetative states' is useful for understanding consciousness (which, I note, some in analytic philosophy seem to do) suggests, to my mind, that the basic problem at stake has not even been carefully formulated.

LC: "What could be more self-evident than one's conscious, experiential states?"

What does 'self-evident' mean here? It is relatively self-evident to me that I am conscious (by which I mean nothing more than I am not, as far as I can tell, dead). But beyond that (which is almost nothing) I have no idea whatsoever what (human) consciousness is. Can you say? 

LC: "If you don't know what a feeling is, then how could you understand the problem of the explanatory gap? "

I don't understand "the problem of the explanatory gap". Or at least, as I have said several times, I think the phrase is a misnomer, a red herring, a comforting illusion.

DA






2009-05-30
The 'Explanatory Gap'
Reply to Derek Allan
DA: "If, by some ingenious surgery, we could open up someone's brain while they were conscious and seeing their "phenomenal world", would we see these representations - even if we needed a microscope?  Would we see, for example, little pictures on a sort of cinema screen in the brain?  If not, what would we see?

First, I would not say that we see our phenomenal world. The phenomenal world is our conscious experience. We have, in fact, opened up the brains of people while they were conscious (for medical reasons). What we see are neurons, glia, and other biological stuff. We don't see little pictures because the conscious brain contains biophysical representations (spatiotopic analogs) of the real world, not little copies of what's out there. 
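As a toy illustration only (it is not the retinoid model itself, and the grid, coordinates and numbers are invented for the example), a "spatiotopic analog" can be thought of along these lines in Python: a grid of cell activations indexed by egocentric coordinates, with the self at the origin, where "representing" an object is a pattern of activation at its location relative to the self rather than a miniature picture of it.

import numpy as np

GRID = 21                        # cells per axis; the self sits at the centre
retinoid = np.zeros((GRID, GRID))
origin = GRID // 2               # egocentric origin (the "self locus")

def represent(dx: int, dy: int, strength: float = 1.0) -> None:
    """Activate the cell at offset (dx, dy) from the self."""
    retinoid[origin + dy, origin + dx] = strength

represent(dx=5, dy=0)                    # something five units to the right
represent(dx=-3, dy=7, strength=0.4)     # something fainter, up and to the left

# Opening up the "brain" shows only graded activations laid out in egocentric
# space -- no little pictures:
print(retinoid[origin, origin + 5])      # -> 1.0

Seen from outside, such a structure is just biological (or here, numerical) stuff; the spatial layout relative to the origin, not any resemblance to the object, is what does the representing.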


.. AT



2009-05-30
The 'Explanatory Gap'
Reply to Arnold Trehub
AT:  "First, I would not say that we see our phenomenal world. The phenomenal world is our conscious experience."

But I thought you said the phenomenal world is the real world "as it appears to us".  How can it "appear" to us if we do not see it?

AT: "What we see are neurons, glia, and other biological stuff. We don't see little pictures because the conscious brain contains biophysical representations (spatiotopic analogs) of the real world, not little copies of what's out there. "

So if we can't see little pictures inside the brain, can we at least see these 'analog' things - say, with a powerful microscope? For example, what does the analog of an aircraft carrier look like? (Given the idea of an analogy, I guess it must resemble an aircraft carrier in some way - but rather smaller, of course.)

DA  

2009-05-31
The 'Explanatory Gap'
Reply to Derek Allan
AT"First, I would not say that we see our phenomenal world. The phenomenal world is our conscious experience."

DA: "But I thought you said the phenomenal world is the real world "as it appears to us".  How can it "appear" to us if we do not see it?"

When I use the phrase "as it appears to us", I use it in the sense of a phenomenal quality, i.e., a quality not directly seen but consciously experienced. To avoid ambiguity, I can say that the phenomenal world is the real world as it is represented in our brain. The point is that we do not see our representation of the real world --- we have a conscious/phenomenal experience of it.

You can get a better idea of this if you perform the following simple experiment: Take an 8" x 12" sheet of light gray poster-board and draw a 1" solid black square about 3 inches from the top. Look at the square at normal reading distance for about a minute. This will produce a negative after-image on your retinas. Now look at the lower half of the poster-board and you will experience an image of a bright square on the gray sheet where no image actually exists. If you move the poster-board farther away from you, the bright square grows larger. If you bring the poster-board closer, the bright square shrinks in size. None of these subjective changes in the size of the square are happening in your outside visible world! They are happening within the brain mechanism that generates your phenomenal content.
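The geometry behind these size changes can be made quantitative (this is the standard relation usually called Emmert's law; it is offered here only as an illustrative gloss, and the particular numbers are assumptions, not measurements from the experiment): the after-image occupies a fixed patch of retina, i.e. a fixed visual angle, so the size it appears to have scales with the distance of the surface it is projected onto. A minimal sketch in Python:

import math

def apparent_size(visual_angle_deg: float, distance_cm: float) -> float:
    """Linear size corresponding to a fixed visual angle at a given distance."""
    return 2 * distance_cm * math.tan(math.radians(visual_angle_deg) / 2)

angle = 4.8   # a 1" (2.54 cm) square viewed at ~30 cm subtends roughly 4.8 degrees
for d in (30, 60, 120):          # bring the board closer / move it farther away (cm)
    print(d, round(apparent_size(angle, d), 1))
# -> 30 2.5, 60 5.0, 120 10.1 (cm): the same retinal patch "grows" with distance

The retinal patch itself never changes; what changes is the size assigned to it within the viewer's phenomenal space.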


AT: "What we see are neurons, glia, and other biological stuff. We don't see little pictures because the conscious brain contains biophysical representations (spatiotopic analogs) of the real world, not little copies of what's out there. "

DA: "So if we can't see little pictures inside the brain, can we at least see these 'analog' things - say, with a powerful microscope? For example, what does the analog of an aircraft carrier look like? (Given the idea of an analogy, I guess it must resemble an aircraft carrier in some way - but rather smaller, of course.)

We do not currently have the technical means to see the multi-neuronal mechanisms that generate these internal analogs, so we have to formulate biologically plausible theoretical models that are competent to do the job. For example, the hypothesized structure and dynamics of the retinoid model generate analogs of the effects you experienced in the after-image experiment. This is one piece of evidence that the theoretical model can be taken as an explanation of phenomenal content.

..AT

2009-05-31
The 'Explanatory Gap'
Reply to Stevan Harnad

Stevan,

Replacing “functing” with “doing” does not clarify anything for me,  as I will explain.  I find it ironic that you accuse those who do not adopt your feeling/functing distinction of begging the question.  In addition to remaining incoherent, your distinction begs the question against a functional explanation of feelings.

An Uncommon Language

SH:  “Feelings are there, being felt (when they are being felt).”

Wouldn’t we say that the brain (or whatever feels feelings) would not be the same had feelings not existed?

Consider: A doctor asks, “can you feel this?”  Your answer indicates whether or not you felt anything—it is a response to a feeling.  This is how we use the language.

The language of feeling is a way of speaking about our behavior as being a reaction to internal states, and a way of speaking about our internal states as reactions to external events—ways of speaking which do not necessarily have well-defined rules.  (And why should they have well-defined rules?)  Exact or not, the language implies causality.

Inexplicably, you say feelings have no causal role to play.  Whatever you mean by the word “feelings,” then, it is not what is commonly meant by the term.

JS:  “Your view makes all talk of feelings superfluous, including the claim that there is a feeling/functing distinction.”

SH:  “No. It just points out that how and why we feel is unexplained”

I don’t see how that is so.  If feelings have no consequences for anything, then any valid results of our discourse are valid regardless of whether or not feelings exist.  To appeal to feelings—to talk of them at all—is superfluous.  So, not only do I not know what you mean by the term “feelings,” I do not see how you could justify postulating them.

SH:  “Feeling is essentially a "two-part relation": Whenever there are feelings, the feelings are being felt. So it is intrinsic to a feeling that there is both feeling and "feeler."”

I’d say it’s a three-part relation:  There’s a feeling, a feeler, and an object/event which the feeling represents.  You are treating the represented object as though it were the feeling.  Feelings do not represent themselves.  (Self-representation occurs on a higher level of abstraction, through language:  for example, in a conversation or a court of law—and here what is represented is not a feeling.)

Please take note: The fact that feelings represent implies that feelings perform a function.  They do work.  So opposing them to “functing” (or “doing”) does not make sense.


Categories of Feeling

You accused me of “complementing the wrong category” because I argued that “feeling something” is a complemented category.  I was responding to a post in which you repeatedly claimed that “feeling something” had no complement.

For example, you wrote (post #975): “‘feeling something’ is not a complemented category, because we do not and cannot know what it feels like to feel nothing at all.”

Your last post seems to be inconsistent with this and similar comments made in that earlier post.  I am inclined to think that you forgot about your earlier post.  Have you another explanation for this inconsistency?

In any case, your inconsistency only strengthens my belief that your position is incoherent.

To repeat:  there is no feeling of feeling something, because the category of “feeling something” does not pick out a specific feel.  It represents feelings in general; but it does not feel like what it represents.  It does not represent its own feeling.  It does not refer to a specific feeling at all.  Thus, there is nothing it is like to feel something:  “what it is like to feel something” is just as empty as its complement, “what it is like not to feel anything.”

For example, somebody asks you, “what would it feel like to be a rock?”

You could respond, “You aren’t making any sense!”  Or you could say, “It wouldn’t feel like anything.  Rocks don’t feel.”  And wouldn’t this second answer make sense?

It wouldn’t tell you anything about what it is like to be a rock (because there isn’t anything it is like to be a rock).  But then, saying “I know what it feels like to feel” doesn’t tell you anything about feeling, either (because there isn’t anything it is like to feel).  How might you answer the question, “what does it feel like to feel?” 

We can use these categories, and doing so even makes sense; but we are making a mistake if we think we are thereby referring to anything.

SH:  “If the only sense-modality were vision, and the only experience were to see shapes, and all shapes were colored -- counting black as a color -- then the subordinate category "red" would be complemented by anything non-red, but the superordinate category "colored" would be uncomplemented.”

 “Colored” would also be complemented by the category “shaped,” because the same shapes would be recognizable as such despite having different colors.  So, the question, “how is that shape colored?” would make sense.  The category “colored” could also mean “having a variety of colors,” and could thus be complemented by the category “monochromatic.”

SH:  “You're simply repeating, I think, your conviction that in complementing subcategories of a category against other subcategories of a category, we are somehow also complementing the category as a whole, against its own complement.”

I was rather demonstrating that your account of the positive category “feeling something” can also be used to explain its complement.  What I probably should have pointed out, though, is that there is a problem with your account.

You say our concept of “feeling something” is established by our knowledge of an invariant feeling present in all feelings.  This requires that all of our feelings are known as particular feelings before we can have the category “feeling something.”  Yet, there is no knowledge of particulars without general categories.  (The notion of a particular is the notion of an instance of a universal.)  The category of “feeling something” cannot come later.


Family Resemblance

SH:  “"What it feels like to be a bachelor" picks out what every waking minute feels like (to a human male) from birth to the first minute one gets married . . .”

I doubt you will find many people willing to adopt that definition.  If you asked an adult, “what does it feel like to be a bachelor?”, would you expect him to expound upon random childhood memories?

SH:  “It does make sense to say "I know what it feels like to be a bachelor."”

I never said it didn’t.  The point is that you are wrongly inferring from the sense of this statement that it must refer to something.  You think that, because we can use statements like “what it feels like to be a bachelor,” they must pick out some feel or idea—some category which was already there ahead of time, just waiting to be revealed.  But “what it feels like to be a bachelor” does not refer to anything in particular—though, if we wanted to, we could define a referent here.  But in so doing we would be drawing a definition, and not revealing one that was already there.  This is the point of Wittgenstein’s notion of family resemblance.

“Feeling” is also a family resemblance concept.  We learn the word “feeling” based on indirect observations—on distinguishing emotional or mental reactions as such.  We also use the term “feeling” to refer to observation in general, but we have no general criterion for what that means.  Why say that, when we observe, all of our observations contain a unique quality, an invariant aspect which is common to all observations?  What would that be?

You might be tempted to say that the self, the ego, is the invariant entity which all of our observations contain.  But how could we observe our own “I” as an aspect of an observation of something else?

Try to observe yourself looking at something—say, a table.  To do this, you might be tempted to say something to yourself, such as, “I am looking at a table.”  But saying is not the same as observing.  So don’t form words in your head.  You might find yourself noticing your body . . . and that helps you remember that you are in the world along with the table.  But it does not show you you-looking-at-the-table.  There is only the observation of the table and your ability to talk about yourself as the observer.  This suggests that the notion of an observer is constructed with language; it is a grammatical convention, a way of speaking (or, as Wittgenstein would say, a way of life).


Reading Descartes

Saying we need to “read between the lines” is all well and good, but it does not justify the explicit rejection of significant portions of Descartes’ argument and conclusion. 

All judgments, even logical and mathematical ones, were doubtable for Descartes (see Meditations, Meditation 1, section 9; and Discourse on Method, Part 4).  He explicitly concluded that the very first certainty was the cogito.  So I wonder, on what basis do you question these points?

I think Descartes’ decision to doubt mathematical judgments was well-considered.  You had to learn mathematics, and it is conceivable that you learned it all incorrectly.  What if what you have always thought of as “correct answers” were mistakes?  What if mathematical demonstrations didn’t mean what you thought they meant, and there were no “correct answers” to begin with?  (How is a criterion of correctness established?)

Also, I remain wary of replacing cogito with sentio.  For one thing, there is your problematic treatment of the term “feelings.”  Furthermore, I think Descartes chose “thinking” (cogitare) over “feeling” (sentire) for a reason.  For a compelling account, see Jaakko Hintikka’s "Cogito, Ergo Sum: Inference Or Performance?" (1962).

What Does the Cogito Demonstrate?

When I asked how your argument here differs from the theistic arguments I mentioned, you appealed to the demonstrative force of the cogito.  You say the cogito demonstrates that there is an indubitable and invariant feeling of feelings.  Even if this were true (and I doubt it is), it would not demonstrate that feelings lacked causal efficacy.  It is thus unclear how the cogito could support your feeling/functing distinction.

The cogito indicates a logical inference that we should never have the occasion to use.  If you say “I think, therefore I am” to yourself, you are no more convinced of your existence than you had been previously.  For how could saying something demonstrate anything to you, beyond the existence of the statement itself?  (And what if you misinterpreted it?)

As I noted earlier, when we want to observe ourselves as observers, we are tempted to formulate words, such as “I am looking at this.”  The inner sensation of language seems to tell us something.  What it suggests, as I said, is that the I exists only in so far as there is a particular sort of thinking—specifically, the kind of thinking which utilizes certain grammatical forms.  The mistake is in thinking that grammatical forms always refer to specific things.

Descartes noticed that the act of thinking “I am not thinking” implied that he was thinking.  And he concluded that only via such action did he exist as such.  Yet, he misinterpreted the nature of the action.  He believed the word “I” had to refer to something, and since everything outside of the act of thinking was dubitable, he postulated himself as “pure consciousness.”  If we wanted to interpret “pure consciousness” here, we might regard it as “thing which uses grammar.”  Any other interpretation could seem extravagant.

What is demonstrated by the cogito is only the grammar of the “I,” and any “certainty” here resides in the habitual use of the language.

I do not see how the cogito demonstrates anything that could support any of your claims about feelings.  And so my question remains:  How is your argument for a feeling/functing distinction different from the theistic arguments I mentioned?


On Doubt and Certainty

Descartes believed that the act of thinking “I am not thinking” produces an awareness of a contradiction, what Hintikka calls an “existential inconsistency.”  But this is only to say that the utterance of the sentence “I am not thinking” feels contradictory.  And how is the correct interpretation of that feeling established?

Couldn’t one doubt the feeling of a contradiction?  Couldn’t one doubt one’s grammar?

The question is, how is the validity of a grammatical construction demonstrated?  What can be demonstrated is only whether or not one’s behavior is in accordance with the rule.  But this does not mean that the rule is necessary.

A demonstration is only possible if there is some occasion of doubt.  If I cannot doubt that I exist, or that I think, or that I feel, then I cannot demonstrate to myself that I exist, or think, or feel.  So, to say that you cannot doubt that you think or feel means that your thinking and feeling are not in question.  As I noted much earlier in this discussion, if you really doubted that you felt anything, a pinch on the cheek wouldn’t prove anything to you.

You think you can doubt that you have a body, but not that you have a mind.  This, I maintain, is impossible.  What you might argue is that you are mistaken about the nature of your body; and, indeed, you could get any number of facts about your body wrong.  But nothing you could do would amount to doubting that you had a body.

Nor could anyone else doubt that you have a body.  They might doubt that Stevan Harnad exists, or that Stevan Harnad is alive, but that is not the same thing.  (You can also doubt that Stevan Harnad exists, because you can doubt that you are Stevan Harnad.)  Statements addressed using the pronoun “you” are addressed to a cognizant person (rightly or wrongly), or they are spurious.  Thus, the statement “you do not exist” invites just as much contradiction as “I do not exist,” and saying “you do not have a mind” is just as meaningful as saying “I do not have a mind.”  (Hintikka suggests a similar point in the article I mentioned earlier.)

2009-05-31
The 'Explanatory Gap'
Reply to Arnold Trehub
AT: " When I use the phrase "as it appears to us", I use it in the sense of a phenomenal quality, i.e., a quality not directly seen but consciously experienced."

So we seem to have come full circle. Consciousness was, as I understood it, what we were trying to explain.  You are now using it as part of the explanation...


 AT: "We do not currently have the technical means to see the multi-neuronal mechanisms that generate these internal analogs, so we have to formulate biologically plausible theoretical models that are competent to do the job."

But it's really not a question of "technical means" is it? It's a question of a basic incoherence in the argument. 

Your claim, as I understood it, was that consciousness can be explained completely in physical terms. First, you said that what you call "the phenomenal world" is actually in the brain. When I pointed out that this hardly made sense, you said that it was not actually the world but "representations" of it that were in the brain. When I said: so where are these representations, can we see them? you said, well, no, not really, because they are not really representations but "analogs" of them. When I now ask what these "analogs" look like in the brain, they suddenly turn into "multi-neuronal mechanisms".

The point surely is that, as you yourself said at one point, all we will ever see in the brain are "neurons, glia, and other biological stuff".  So there is a fundamental difference in nature - is there not? - between what consciousness seems to be - e.g. what you call "representations" of the world - and what the brain is. The brain is "neurons, glia, and other biological stuff"; consciousness is something else entirely.

And note that this problem has emerged in our discussion of what is probably one of the most basic operations of consciousness - mere perception of present objects.  How much more thorny is it going to be once we start talking about what seem to be more complex operations, such as recall of past events, thinking about future possibilities, reflection on abstract ideas, etc.

I have no problem at all with the proposition that consciousness is accompanied by or facilitated by changes in physical brain states. But to say that consciousness simply is brain states - and can be explained in purely physical terms, as you maintain - seems to me quite untenable.  I realize that such 'physicalist' claims are quite common today - especially in parts of what is called "analytic" philosophy - but to my mind they are nothing short of philosophical nonsense. Indeed, obviously so.

DA

2009-05-31
The 'Explanatory Gap'
Reply to Derek Allan
DA: "Your claim, as I understood it, was that consciousness can be explained completely in physical terms."

No, I didn't claim that consciousness can be explained completely in physical terms. If you recall, I distinguished between two aspects of consciousness: (1) the sheer existence of consciousness, and (2) the content of consciousness (phenomenal content). I was careful to point out that the sheer existence of consciousness cannot be explained by science, just as, for example, the sheer existence of space-time or the fundamental forces cannot be explained by science. So consciousness, like space-time, is a theoretical primitive in the conceptual framework of science. On the other hand, I claimed that the content of consciousness (phenomenal content) can be explained in biophysical terms. In support of this claim, I have proposed a detailed theoretical model of a brain mechanism (the retinoid system) that has been shown to explain many previously unexplained phenomenal experiences, and also to successfully predict novel subjective phenomena.


DA: "So there is a fundamental difference in nature - is there not? - between what consciousness seems to be - e.g what you call "representations" of the world - and what the brain is. The brain is "neurons, glia, and other biological stuff"; consciousness is something else entirely."

I don't question that what consciousness "seems to be" is different than what the brain is. This is the root of your inability to accept the notion that your phenomenal experience is constituted by the operation of a particular kind of biophysical mechanism within your brain. It is a problem of your belief about the fundamental nature of conscious content (e.g., that it cannot possibly be the activity of neuronal mechanisms), not a problem with the overwhelming weight of evidence supporting the conclusion that phenomenal content is generated by a particular kind of brain mechanism which represents the world from a privileged egocentric perspective.


DA: "But to say that consciousness simply is brain states - and can be explained in purely physical terms, as you maintain - seems to me quite untenable.  I realize that such 'physicalist' claims are quite common today - especially in parts of what is called "analytic" philosophy - but to my mind they are nothing short of philosophical nonsense. Indeed, obviously so."

Such claims can hardly be called "philosophical nonsense" because they are well supported by empirical and logical evidence. What seems obvious to you, I suggest, is based on deeply held prejudice and a denial of abundant relevant evidence. Did you try the after-image experiment? If you did, what do you think caused the changes you experienced? If you didn't, it seems to me that you are unwilling to consider evidence that might change your mind.

..AT

2009-05-31
The 'Explanatory Gap'
Reply to Arnold Trehub
AT: "No, I didn't claim that consciousness can be explained completely in physical terms. .... On the other hand, I claimed that the content of consciousness (phenomenal content) can be explained in biophysical terms."

You have been using this word "phenomenal" a lot and it sounds impressive. But what do you mean by it?  What exactly do you mean by the "phenomenal content of consciousness"? (I suspect we are going to go over ground we have already covered here. E.g., are there aircraft carriers, or representations of them, or 'analogs' of them, etc., inside the brain?)

AT: " This is the root of your inability to accept the notion that your phenomenal experience is constituted by the operation of a particular kind of biophysical mechanism within your brain."

There is a clear equivocation here, especially in your phrase "constituted by the operation". Few people (myself included) probably doubt that the "operation of a particular kind of biophysical mechanism within the brain" makes consciousness possible (i.e., "constitute" in that sense). But it is a very different matter to say, as you seem to want to, that these "operations" are consciousness ("constitute" in that sense). Which of these is your claim exactly?

AT: "Such claims can hardly be called 'philosophical nonsense' because they are well supported by empirical and logical evidence."

Evidence for what?  The claim I dismissed as nonsense is that "consciousness simply is brain states." Do you have "empirical and logical evidence" for that?

The root of a lot of these problems, in my view, is that the term "consciousness" is so often treated as if it were clear and self-explanatory. It is anything but. It is an extremely mysterious idea. I started reading a paper recently which began: "There is nothing that we know more intimately than conscious experience but there is nothing that is harder to explain." I think the first part of this statement is one of the grand illusions of contemporary (especially analytic) philosophy. "Conscious" is one of the words we use to describe the nature of human experience, and since we are all "conscious" we assume we must "know it".  We don't.  Not intimately. Not even roughly.  And that fact leaps to the eye as soon as someone starts trying to describe what consciousness is. We are offered all kinds of weak synonyms like "feeling", "experience", "phenomenal" this, that and the other, etc. Or else we get feeble formulae like Nagel's that consciousness is "to be like" something (or whatever it is exactly). None of this helps one iota. It is simply exchanging one term for another.

Personally, I don't believe philosophy will get anywhere at all with the question of consciousness until it starts with the frank admission that, far from "knowing" it intimately, it knows almost nothing at all about it - and, in particular, that it is extremely hard put even to say what consciousness is.  


DA
 




2009-05-31
The 'Explanatory Gap'
Reply to Derek Allan
DA: Personally, I don't believe philosophy will get anywhere at all with the question of consciousness until it starts with the frank admission that, far from "knowing" it intimately, it knows almost nothing at all about it - and, in particular, that it is extremely hard put even to say what consciousness is.
That conflates experience with intellectual understanding. All of us have the former, but there's little if any consensus on the latter: it is extremely hard to agree what consciousness is. On the other hand, some of us think it's quite easy to say what it is, though even those who agree on that will tend to differ on the particulars.

For what it's worth, here's my tuppence-worth: consciousness is nothing more nor less than a point of view. What specifies a particular point of view is what can be viewed from that point. Every viewpoint has a geographical location, but in addition to that, a particular instance of consciousness at a particular time is specified by physiological and psychological factors. But these are just additional constraints; there's nothing metaphysically special about us conscious entities, it only seems so because we're thinking and talking about ourselves. (Outside of metaphysics, of course, there's a great deal that's special about us, both collectively and individually.)

2009-06-01
The 'Explanatory Gap'

Robin, Thanks for your comment. Here's my reply.

RF: "That conflates experience with intellectual understanding."

But 'intellectual understanding' is surely what philosophy is about, isn't it?  You are, in a sense, just restating my point. Yes, we 'experience' consciousness (vaguely speaking), but it does not follow from that that we understand it. And understanding it is precisely what the philosophical endeavor in this area is all about, isn't it?  (If it isn't, I'm not sure what it could be about!) 


RF: "For what it's worth, here's my tuppence-worth: consciousness is nothing more nor less than a point of view...."

But this surely won't do, will it? To begin with, it would not allow us to distinguish between animal and human consciousness (assuming we can sensibly apply the same term to both). A cat stalking a bird has a 'point of view' (I imagine). 

Second, having a 'point of view' is surely just one of the possible experiences of a conscious person - along with not having a point of view, thinking about something, not thinking about anything in particular, feeling passionate about something, feeling apathetic, looking forward to tomorrow, not caring if tomorrow never comes, and so on. These are among the umpteen things a conscious person can do, but none of them tells us what (human) consciousness is.

DA

2009-06-01
The 'Explanatory Gap'
Reply to Derek Allan
DA: "What exactly do you mean by the "phenomenal content of consciousness"?"

By "phenomenal content" I mean what you are aware of when you are either awake or dreaming. For example, as you read this comment you have a phenomenal experience of a screen in front of you containing a printed message that you might want to critique and perhaps type a response.


DA: "Few people (myself included) probably doubt that the "operation of a particular kind of biophysical mechanism within the brain" make consciousness possible (ie "constitute" in that sense). But it is a very different matter to say, as you seem to want to, that these "operations" are consciousness ("constitute" in that sense). Which of these is your claim exactly?"

My claim is that the biophysical activity of the brain mechanism that represents the world from a privileged egocentric perspective (the retinoid system) is consciousness/phenomenal content.


DA: "The claim I dismissed as nonsense is that "consciousness simply is brain states." Do you have "empirical and logical evidence" for that?"

Indeed I do have such evidence. Read Trehub (2007), "Space, self, and the theater of consciousness", Consciousness and Cognition.

You agree that "the operation of a particular kind of brain mechanism makes consciousness possible", but you think it is absurd to claim that this brain activity is consciousness. So you assume that consciousness requires something extra --- something in addition to the brain activity of the person having the conscious experience. Is this something extra physical? Is it non-physical --- a spiritual substance of some sort? What causal role does it serve in generating the content of consciousness? If it does serve a causal role, how does it do so? What is your principled reason for claiming that consciousness can not simply be brain states?

You haven't yet replied to my question earlier in this thread: "What do you think guides the brush of the artist?"   


DA: "We are offered all kinds of weak synonyms like "feeling", "experience", "phenomenal" this, that and the other" etc etc. Or else we get feeble formulae like Nagel's that consciousness is "to be like" something (or whatever it is exactly). None of this helps one iota. It is simply exchanging one term for another."

Given the ambiguity of language, you have a point here. That's why it helps to point to the structural and dynamic details of a model brain mechanism as an exemplar of phenomenal content.

.. AT

2009-06-01
The 'Explanatory Gap'
Reply to Arnold Trehub
AT: "So you assume that consciousness requires something extra --- something in addition to the brain activity of the person having the conscious experience. Is this something extra physical? Is it non-physical --- a spiritual substance of some sort? What causal role does it serve in generating the content of consciousness? If it does serve a causal role, how does it do so? What is your principled reason for claiming that consciousness can not simply be brain states?"

I have absolutely no idea what the nature of the "something extra" is; but my (and everyone's) ignorance about this is certainly not evidence that there is not "something extra". (There are some issues like this where a healthy, honest agnosticism is not a bad thing.)

As for its 'causal role in generating the content of consciousness', I don't understand the question. The something else is (what we call) consciousness.

My 'principled reason' for claiming that 'consciousness can not simply be brain states' is simply that it makes no sense to say this. As we saw in our recent exchange, one ends up having to claim things like: consciousness is (to borrow your words) "neurons, glia, and other biological stuff." Whatever it is, consciousness is surely a kind of (human) experience. 'Neurons, glia, and other biological stuff' are not an experience; they are things.

AT: "You haven't yet replied to my question earlier in this thread: "What do you think guides the brush of the artist?"  

This is a very interesting (and much neglected) question in the philosophy of art, but not one that would throw any light on our present discussions.

DA

2009-06-02
The 'Explanatory Gap'
Reply to Derek Allan
DA: "My 'principled reason' for claiming that 'consciousness can not simply be brain states' is simply that it makes no sense to say this.  As we saw in our recent exchange, one ends up having to claim things like: consciousness is (to borrow you words) "neurons, glia, and other biological stuff." Whatever it is, consciousness is surely a kind of (human) experience. 'Neurons, glia, and other biological stuff' are not an experience; they are things."

Your argument is not reasonable. For example, transistors, capacitors, resistors, and inductors are just "things". But when they are organized into the right kind of mechanism they constitute a radio receiver, something that realizes a property that the separate components cannot realize. Just so in the case of your own brain. My assertion is that the neurons in your brain are organized into a biophysical mechanism that constitutes the essential part of you as a conscious, pondering philosopher.

I might add as a correction to your comment above, I didn't say that "consciousness is neurons, glia, and other biological stuff". I said that when we look into the brain of a conscious patient we see neurons, glia, and other biological stuff. Clearly, there is a big difference between what I said and what you say I said.


AT: "You haven't yet replied to my question earlier in this thread: "What do you think guides the brush of the artist?"   

DA: "This is a very interesting (and much neglected) question in the philosophy of art, but not one that would throw any light on our present discussions."
 

Are you really prepared to claim that the unique consciousness of the artist does not have an important causal relationship to what he puts on canvas?

.. AT 



2009-06-02
The 'Explanatory Gap'
Reply to Derek Allan
RF: "For what it's worth, here's my tuppence-worth: consciousness is nothing more nor less than a point of view...."
DA: But this surely won't do, will it? To begin with, it would not allow us to distinguish between animal and human consciousness (assuming we can sensibly apply the same term to both). A cat stalking a bird has a 'point of view' (I imagine).
As has already been said in this thread, the philosophical problems of consciousness concern what any and every instance of it has in common. The so-called hard problem has nothing to do with the contents of consciousness, what any conscious entity might or might not be conscious of. A thing is considered to be conscious if it is the subject of experience, but the nature of that experience is irrelevant in this particular context.
DA: Second, having a 'point of view' is surely just one of the possible experiences of a conscious person - along with not having a point of view, thinking about something, not thinking about anything in particular, feeling passionate about something, feeling apathetic, looking forward to tomorrow, not caring if tomorrow never comes, and so on. These are among the umpteen things a conscious person can do, but none of them tells us what (human) consciousness is.
You seem to be interpreting "point of view" as "opinion". I'd have thought what I said there, especially concerning geographical location, would have made it obvious that's not what I meant, but obviously not. What all these things you mention have in common is "aboutness", or intentionality (my view on that is quite close to Dennett, 1987). My "point of view" concept, which is very broad, is intended to encompass intentionality. Subjectivity, consciousness and intentionality are extremely closely related. Where there is a subject, there is a point of view, but every geographical location in the universe is a potential viewpoint, and what distinguishes what we would consider a subject, at a particular time and place, from the case in which that particular time and place is empty, is just the physiological and psychological characteristics of the relevant entity. In other words, nothing metaphysical.

Of course, there is a very great deal more to it than that. I've written several tens of thousands of words, though I've nothing in print as yet. Maybe that's why I'm trying, despite all the evidence of futility, to explain it in an Internet forum!

Daniel Dennett, The Intentional Stance, MIT Press, 1987 (from memory).

2009-06-02
The 'Explanatory Gap'
Reply to Derek Allan
"My 'principled reason' for claiming that 'consciousness can not simply be brain states' is simply that it makes no sense to say this.  As we saw in our recent exchange, one ends up having to claim things like: consciousness is (to borrow you words) "neurons, glia, and other biological stuff." Whatever it is, consciousness is surely a kind of (human) experience. 'Neurons, glia, and other biological stuff' are not an experience; they are things"..DA

I think that is your error. Anything that has a head and a body, including a worm or an insect, has some form of consciousness. You can argue all you want, but that's a common-sense insight. What is distinctively human is that no other form of life talks about its consciousness or concocts an explanatory gap - an explanatory trap, as I see it. If consciousness can be described simply as how a living being absorbs its environment, then the real mystery is human language, not human consciousness - or how human language forms this problem.

As far as dualism goes, Einstein proved that matter does not come out of nowhere, including the matter that makes us.
It's all about understanding structure, or so I (an engineer) see it.

2009-06-02
The 'Explanatory Gap'
Reply to Derek Allan
Derek,

You say: "My 'principled reason' for claiming that 'consciousness can not simply be brain states' is simply that it makes no sense to say this."

From what I gather, your view is that it makes no sense to say anything at all about consciousness, except that we don't know anything about it.  So it should make just as little sense to you to say "consciousness is not brain states" as to say "consciousness is brain states."

As I think I told you already, I share your concern about how the term "consciousness" is sometimes presumed to have a clear meaning in philosophical discussions.  The term "consciousness" has many uses in our language, and the fact that we understand ourselves as conscious beings is an integral part of who we are; but this does not mean that "being conscious" means anything in particular, or that the term "consciousness" must refer to a particular entity or process.

We should not betray how the language is used in normal (i.e., non-philosophical) life--not only the word "consciousness," but a family of related terms, such as "feelings" and "experiences."  To understand consciousness, we have to understand how people use the language of consciousness--because an explanation of anything else would not be an explanation of consciousness.

For example, if our language of consciousness is clear on anything, it is clear here:  that we talk of a person being conscious when they are awake, and not when they are asleep.  Neuroscientists have made headway in understanding how sleep-wakefulness cycles are regulated.  The science of consciousness is already advancing.

There are countless other demonstrable relationships between neurological processes and consciousness.  At the simplest extreme, when you close your eyes, you no longer experience vision.  The more we understand the brain, the more we understand the behavior we associate with consciousness; that is, the behavior which motivates our use of the language of consciousness.

I know you are skeptical of a behavioral approach here, but I hope you will consider subjecting your prejudice to further scrutiny.  To say that consciousness cannot be understood in terms of behavior is to say that the language of consciousness cannot be learned; which is to say that all talk of consciousness is meaningless.  But if that were the case, the problem would not be that consciousness is outside of our scientific grasp; rather, it would be that we were all deluded in thinking that there is anything to grasp for.  The everyday language of consciousness would be a widespread error, and nothing more.  (This is not a view I encourage, but it follows logically from an anti-behavioral stance.)

As I understand it, there are no a priori limitations on scientific discovery.  We cannot use philosophy to describe something which exists but which scientists cannot understand.  So there is no basis for postulating "something else" here, if by that it is meant something beyond the realm of scientific discoverability.  Perhaps consciousness is not all about neurological function; but that should be a scientific hypothesis, not a philosophical presupposition.

2009-06-03
The 'Explanatory Gap'
RF: "The so-called hard problem has nothing to do with the contents of consciousness,.."

I'm not, as far as I am aware, talking about the 'contents of consciousness'. (I'm not even sure what the phrase means.)  How do you get this out of what I said?

I'm also not sure what geographical location has to do with the issue.  Consciousness is surely about much more than what the world looks like from where I am situated. ( I suspect I am missing your point.)

DA

2009-06-03
The 'Explanatory Gap'
Reply to Arnold Trehub
AT:  For example, transistors, capacitors, resistors, and inductors are just "things". But when they are organized into the right kind of mechanism they constitute a radio receiver, something that realizes a property that the separate components cannot realize.

But I would have thought this supports my argument rather than yours. The things in this case generate something of a quite different nature from those things.

But of course the parallel is only a very rough one anyway. We know what the new something is in the case of a radio. We don't know that in the case of consciousness. Also we can't just assume that the brain generates (or 'realizes') whatever consciousness is. We have no idea what the relationship is between brain and consciousness (a problem not helped by not even knowing what consciousness is).

AT: I said that when we look into the brain of a conscious patient we see neurons, glia, and other biological stuff.

Correction noted. But it doesn't alter my point: consciousness is not an observable thing.

AT: Are you really prepared to claim that the unique consciousness of the artist does not have an important causal relationship to what he puts on canvas?

Not at all. But there is nothing about artistic creativity as such that throws any special light on our present topic - any more than discussing how a good fisherman operates or a good hunter hunts would. The issue is about a general human capacity.

DA


2009-06-03
The 'Explanatory Gap'
VP : "Anything that has a head and a body including a worm or an insect has some form of consciousness."

Interesting definition. I am not enough of a biologist to know exactly when it becomes impossible to distinguish a head from a body. But I guess this would exclude jellyfish, octopuses (maybe?), and of course all sorts of amoebae, etc. Personally I would tend to exclude worms too. Snakes would pass, I guess. What about starfish? All very tricky...

DA

2009-06-04
The 'Explanatory Gap'
Hi Jason

I wrote a reply to you last night and then the site went down and I lost the lot. Here goes again.

JS: "From what I gather, your view is that it makes no sense to say anything at all about consciousness, except that we don't know anything about it.'

That is not quite my position. I have actually said one or two things (!). I've said it is not just neurons etc. I've even been rash enough to say it is a form of experience - though that comment is embarrassingly inadequate. I do think that consciousness is far more complex than a lot of the philosophical discussion I see seems to assume.  For instance, the example of 'seeing the colour red' which is used so often strikes me as rather like talking about the power of a supercomputer by saying it can add 2 and 2. (This is just a metaphor, by the way. It is not a suggestion that consciousness is like a supercomputer.)

JS: "we have to understand how people use the language of consciousness--because an explanation of anything else would not be an explanation of consciousness."

I thought 'ordinary language' philosophy had gone out of fashion?  But in any case it is of very limited use here. Take your next comment that "it is clear here: that we talk of a person being conscious when they are awake, and not when they are asleep."  I think this is a red herring.  States of being asleep or in a coma (the second is the better example) are simply states where consciousness is not there. They tell us absolutely nothing about what a (waking) consciousness is.

JS: " Perhaps consciousness is not all about neurological function; but that should be a scientific hypothesis, not a philosophical presupposition."

Why so? If consciousness is not scientifically explicable (which it may well not be), why should it be the exclusive preserve of science?  By the way, I am not against scientific research into the brain. (It will continue anyway because there are good medical reasons for it.)  My main concern is that if philosophy wants to talk about consciousness (and why not?) then it should at least begin with a solid attempt to think through what consciousness might be. So far, my impression (from things like the Nagel damp squib) is that there is an awfully long way to go just doing that much.

DA

2009-06-04
The 'Explanatory Gap'
Reply to Derek Allan
RF: "The so-called hard problem has nothing to do with the contents of consciousness,.."
DA: I'm not, as far as I am aware, talking about the 'contents of consciousness'. (I'm not even sure what the phrase means.)  How do you get this out of what I said?
I realised after sending that, but too late to edit it, that I'd made the mistake of assuming that you had conscious content in mind when you brought up the distinction between human consciousness and that of other species. As for what it means, it's simply whatever one is conscious of; for instance, as I write, these words on the screen, what I intend to convey by them, and (alternating with that) my guesses as to how they might be read.

DA: I'm also not sure what geographical location has to do with the issue.  Consciousness is surely about much more than what the world looks like from where I am situated. ( I suspect I am missing your point.)
Indeed. A crucial concept here is intentionality, or "aboutness", which Brentano called "the mark of the mental": every mental phenomenon, such as awareness, desire, fear, etc, etc, is about something, even if that thing is illusory or fictional, whereas no physical phenomenon is about anything. Dennett is probably the most prominent contemporary exponent of intentionality, with his concept of "the intentional stance". If your interest in philosophy of mind is serious, you definitely need to understand intentionality, and in my opinion you need to study Dennett. It might well be the case that an understanding of the intentional stance is required to appreciate what I'm saying in connection with geographical location. See the reference in my previous message, and, of course, Google.

2009-06-05
The 'Explanatory Gap'
RF: "Indeed. A crucial concept here is intentionality, or "aboutness", which Brentano called "the mark of the mental": every mental phenomenon, such as awareness, desire, fear, etc, etc, is about something, even if that thing is illusory or fictional, whereas no physical phenomenon is about anything."

Ah yes, "aboutness". This pops up from time to time in the philosophy of art. Arthur Danto is one of its chief exponents there. But that is to digress.

I don't think talk about intentionality, or "aboutness" helps very much re the question of what consciousness is. The problem is that things too easily become circular. Question: 'What is consciousness?"  Answer: "A state of being 'about' something."  Next question: "But this phenomenon (consciousness) could hardly be 'about' something if it wasn't conscious, could it?" (and you yourself say that "no physical phenomenon is about anything".)

Basically, we are back in the land of what I've called 'weak synonyms' for consciousness (feeling, experience, etc), except here we use an invented noun derived from a preposition.

Re Dennett: I tried to read his "Consciousness Explained" (a smidgen of hubris in that title, perhaps?). I found he was 'explaining' all the things I thought peripheral, and addressing none that I thought important. In particular, he steered well away from venturing a clear 'explanation' of what he thought consciousness is.  Since it was precisely that that he claimed to be explaining, I soon lost interest....

DA

2009-06-05
The 'Explanatory Gap'
Reply to Derek Allan
Derek,

You've said it's "not just neurons" and, more generally, you've expressed a strong skepticism towards the idea that we could ever scientifically grasp consciousness.  But such statements do not seem to combine well with your insistence that, before we make any philosophical judgments about consciousness, we must first try to figure out what it is.  You are making strong judgments about what consciousness is not, and also about probable limitations on our ability to learn about consciousness--all the while criticizing anybody who presumes to know enough about consciousness to make any judgments at all about it.  Am I wrong in noticing an inconsistency here?

You say, "I thought 'ordinary language' philosophy had gone out of fashion?" 

I don't know.  In any case, I was not advocating "ordinary language" philosophy, exactly.  But I wouldn't disparage it, either.  Philosophy involves ordinary languages, specialized languages, and varieties of formal logic.  I wouldn't exclude any of these from our collective toolbox.

My point was rather that the notion of consciousness (and related notions, such as intentionality, feelings, and experience) has its roots in ordinary language, and it is because of these roots that we are able to talk about notions like "the problem of consciousness" or "the mind/body problem."  The philosophical issues here come out of ordinary language, and they can only be resolved by making sense of our ordinary language.  This, I believe, can only be achieved by understanding the behavior which motivates that language.  However, this does not mean that our analysis must be limited to ordinary language.

I think you misunderstood the example of sleep-wakefulness cycles here.  The point was not that, by understanding the neurological aspects of sleep-wakefulness cycles, scientists have discovered something like the "true nature" of consciousness.  Rather, they have made some progress towards understanding the behavior which motivates our talk of consciousness.    Part of what it means to be conscious is to be awake, alert and active in the world.  By beginning to understand the neurological aspects of how wakefulness is regulated, we have come a step towards understanding the neurological functionality of consciousness.  This is not a step towards telling us "what consciousness is," but rather progress towards understanding how consciousness happens.

You ask, "Why so? If consciousness is not scientifically explicable, (which it may well not be) why should it be the exclusive preserve of science?"

The issue here relates to how we frame the problem.  You can define some aspect of human behavior out of observable existence (and so frame the problem in question-begging terms which defy objective analysis).  That would define it out of scientific discoverability; but what avenues of inquiry would it leave open?

It is possible that there are limitations on what science can do in the study of consciousness--it may be that human behavior is too complex for any feasible science to penetrate completely.  But that is not something we should assume.  So what basis is there for excluding some aspects of human behavior from scientific scrutiny?

I think the burden is on those who criticize science to give a reason why science has some limitations here.  The reason I don't think this can be done is that such a reason would have to be analyzable objectively, and without appeal to absolutely subjective knowledge.


2009-06-05
The 'Explanatory Gap'
Reply to Derek Allan
AT:  "For example, transistors, capacitors, resistors, and inductors are just "things". But when they are organized into the right kind of mechanism they constitute a radio receiver, something that realizes a property that the separate components cannot realize."

DA: "But I would have thought this supports my argument rather than yours. The things in this case generate something of a quite different nature from those things.

It seems to me that you miss the point here. What I'm trying to convey is the idea that separate brain cells (things) do not individually realize consciousness; rather, it is the collective activity of such cells when they are organized into the right kind of brain mechanism (e.g., the retinoid mechanism) that realizes the property of consciousness. Put crudely, the idea is that some kinds of dumb stuff, when properly organized, can become smart stuff.


DA: "Also we can't just assume that the brain generates (or 'realizes') whatever consciousness is. [1] We have no idea what the relationship is between brain and consciousness [2] (a problem not helped by not even knowing what consciousness is)."

[1] You are simply ignoring an overwhelming body of evidence supporting the idea that consciousness is realized by some particular kind of brain activity.

[2] I've argued that consciousness is a transparent representation of the world from a privileged egocentric perspective. I'd be interested in your principled argument against this definition of consciousness.


DA: "But there is nothing about artistic creativity as such that throws any special light on our present topic - any more than discussing how a good fisherman operates or a good hunter hunts would. The issue is about a general human capacity.

I disagree. If we want a better understanding of consciousness we must refer to specific instances of its content and relate these to the human condition. An analysis of the creative "vision" of the artist can illuminate the role that consciousness must play in a peculiarly human activity. We can then address the question  of how such a vision might be generated.


.. AT

2009-06-05
The 'Explanatory Gap'
JS: You've said it's "not just neurons" and, more generally, you've expressed a strong skepticism towards the idea that we could ever scientifically grasp consciousness.  But such statements do not seem to combine well with your insistence that, before we make any philosophical judgments about consciousness, we must first try to figure out what it is.  You are making strong judgments about what consciousness is not, and also about probable limitations on our ability to learn about consciousness--all the while criticizing anybody who presumes to know enough about consciousness to make any judgments at all about it.  Am I wrong in noticing an inconsistency here?

To some extent I am being inconsistent, since I cannot say what consciousness is either. But whatever it is, human consciousness is ultimately what makes us human, is it not? So it is what enables all our emotions, from the most mean and petty - or even suicidal - to the most profound or exalted; it is what makes possible the most elementary calculation (say 2+2) and the most complex and obscure theorizing; it is what gives us a sense of the passing of time and the knowledge - which we presume no other animal shares - that we will die one day. And lots more besides. I don't know how to describe the ... quality (I'm searching for words) that makes all that possible, but I do know that none of the formulations I've encountered so far ("it's all neurons", "it's experience", "it's aboutness", "it's being 'like' something", etc.) even begins to explain any of this.  They are light years away from it.  So while I cannot say what consciousness is, I don't have much trouble recognizing what it is not.
  
JS: "My point was rather that the notion of consciousness (and related notions, such as intentionality, feelings, and experience) has its roots in ordinary language, and it is because of these roots that we are able to talk about notions like "the problem of consciousness" or "the mind/body problem."  The philosophical issues here come out of ordinary language, and they can only be resolved by making sense of our ordinary language.  This, I believe, can only be achieved by understanding the behavior which motivates that language.  However, this does not mean that our analysis must be limited to ordinary language."

But presumably, if one is an advocate of this approach to philosophy (which I am definitely not), it would not be consciousness alone - or even especially - that would be amenable to it? Presumably any philosophical problem would be?

JS "This is not a step towards telling us "what consciousness is," but rather progress towards understanding how consciousness happens."

But how could one do one without the other - how can one say how X happens if one doesn't know what X is? (and consciousness is not simply being "awake, alert and active in the world". Being awake is merely the condition that makes consciousness possible - just as being dead is the condition that makes it impossible. It should not be confused with consciousness itself. Ordinary language leads us astray here (as it often does). We say "he is conscious" meaning he has recovered from the blow to the head. But that is hardly what we are talking about.)

JS: " So what basis is there for excluding some aspects of human behavior from scientific scrutiny?"

But as I said I do not want to exclude it.  All power to it. But let's not start with the presupposition that it must be the only - or even the best - way. The history of the scientific study of human behavior is not very encouraging, I should add. One has only to think of the dead end called behaviorism - which to my mind the current neuroscientific stuff in philosophy is, in the end, only a reincarnation of. (Excuse the contorted grammar!)

DA

2009-06-05
The 'Explanatory Gap'
Reply to Arnold Trehub
AT: "it is the collective activity of such cells when they are organized into the right kind of brain mechanism (e.g., the retinoid mechanism) that realizes the property of consciousness"

'Realizes'?  What does that mean?  (Ditto for "You are simply ignoring an overwhelming body of evidence supporting the idea that consciousness is realized by some particular kind of brain activity.")

 AT: "I've argued that consciousness is a transparent representation of the world from a privileged egocentric perspective. I'd be interested in your principled argument against this definition of consciousness."

I have so many questions about what you mean by it, I couldn't possibly argue against it.

AT: "I disagree. If we want a better understanding of consciousness we must refer to specific instances of its content and relate these to the human condition. An analysis of the creative "vision" of the artist can illuminate the role that consciousness must play in a peculiarly human activity. We can then address the question  of how such a vision might be generated."

Again, too many words to sort out (e.g. 'specific instances of its content', 'human condition', 'vision'...)

DA

2009-06-05
The 'Explanatory Gap'
Reply to Derek Allan
DA: I don't think talk about intentionality, or "aboutness" helps very much re the question of what consciousness is. The problem is that things too easily become circular. Question: 'What is consciousness?"  Answer: "A state of being 'about' something."  Next question: "But this phenomenon (consciousness) could hardly be 'about' something if it wasn't conscious, could it?" (and you yourself say that "no physical phenomenon is about anything".)
No, that's not true. This message is about something (I hope), as are all communications. Some philosophers get around this by saying that messages and such have merely derived intentionality, while mental states are intrinsically intentional, but I've yet to see a convincing explanation of the difference between intrinsic and derived intentionality. (I believe I have one, but (a) I'm hoping to publish it eventually, (b) it's too involved to explain in a forum post, and (c) I have no doubt, from what you've said in this thread, that you'd find it either wholly unacceptable or totally incomprehensible.)

DA: Re Dennett: I tried to read his "Consciousness Explained" (a smidgen of hubris in that title, perhaps?). I found he was 'explaining' all the things I thought peripheral, and addressing none that I thought important. In particular, he steered well away from venturing a clear 'explanation' of what he thought consciousness is.  Since it was precisely that that he claimed to be explaining, I soon lost interest....
In my opinion, and, I think it's safe to say, in that of Dennett, a clear and concise explanation of what consciousness is, is impossible. For one thing, the philosophy is too slippery: different people have different ideas both about exactly what needs to be explained, and what would constitute a good explanation. And the psychology is too complicated, because what appears to be a simple quality (a thing either is conscious or it is not) is actually a composite of many factors. And unless you're willing to accept that might be the case, you're not sufficiently motivated to put in the effort that's required to understand what he's saying. In particular, if you think consciousness is "what makes us special", then you're unlikely to be receptive to any explanation that threatens to dissipate that aura.

2009-06-05
The 'Explanatory Gap'
Reply to Derek Allan
Derek,

You ask, "whatever it is, human consciousness is ultimately what makes us human is it not?"

I will return with a different question:  Why must "being human" amount to one thing?

I'm not willing to assume that "being human" is a clearly defined concept.  On what basis do you claim that there is one thing, called "consciousness," which is the basis for all our emotions, our mathematical calculations, our awareness of time, and so on?  Don't you agree that one of the problems here is that the term "consciousness" is being used too irresponsibly, and that we shouldn't make these kinds of assumptions?

If we want to understand what it means to be a human being, we have to look at human behavior, and here we have a variety of scientific perspectives at our disposal, including neuroscience and biology.  We may eventually have more.  But through all of this, we are talking about observable behavior.  What else do you propose we look at?  What else could we look at?  Why think that there is anything else we could possibly consider?

And I disagree with you when you say neuroscience has not begun to explain consciousness.  For example, we have learned a lot about brain systems involved in some emotions, and about how visual information is processed in the brain.  You might say we have barely made a start, considering how much we do not yet know; but it is a start nonetheless, and there is no reason to think we are going up a blind alley.

You ask, "But presumably, if one is an advocate of this approach to philosophy (which I am definitely not), it would not be consciousness alone - or even especially - that would be amenable to it? Presumably any philosophical problem would be?"

Yes, it seems plausible to me that all philosophical problems are rooted in ordinary language, and that their resolution depends upon properly understanding the relationships between language and behavior.

JS: "This is not a step towards telling us "what consciousness is," but rather progress towards understanding how consciousness happens."

DA:  "But how could one do one without the other - how can one say how X happens if one doesn't know what X is?"

The point is that in actual cases it is obvious what we mean when we say "that person is conscious."  In some cases, for example, we mean, "that person is awake and alert."

You suggest that, in presenting this example, I am allowing ordinary language to lead me astray.  I profoundly disagree.  This is the whole point: that what we mean by "consciousness" is not just one thing.  If you accuse me of going astray every time I attempt to interpret the language directly, in relation to how the terms are actually used--that is, in relation to human behavior--then the term "consciousness" will remain mystified and impenetrable.  This will not advance our discourse, nor will it preserve any sort of philosophical truth.  It will merely prevent us from furthering our mutual understanding.

You say, "let's not start with the presupposition that [science] must be the only - or even the best - way."

Your view of science is probably quite different than mine.  I would not say that I am presupposing that science is the best, or the only "way" to discovery.  Rather, in my understanding, science is the formalization of discovery.  When we develop methodical procedures for the discovery of some phenomenon, we call it "science."  It is not a question of whether or not science is the only way, or even the best way.  It is a question of whether you want to define "consciousness" to be something which can be discovered or if you want to define it as something which must remain beyond comprehension.  And if you choose the latter, I cannot see any hope for our discourse.

2009-06-06
The 'Explanatory Gap'
JS: "On what basis do you claim that there is one thing, called "consciousness," which is the basis for all our emotions, our mathematical calculations, our awareness of time, and so on?  Don't you agree that one of the problems here is that the term "consciousness" is being used too irresponsibly, and that we shouldn't make these kinds of assumptions?"

If human consciousness is not at the root of all this, what could be?  Animals don't seem to be aware they are going to die one day, to be able to add 2 and 2, to sometimes want to commit suicide (lemmings aside...), to worship gods, etc etc.  If these kinds of things, and the others I mentioned, are not somehow the consequence of human consciousness, what function or meaning could the idea of human consciousness even have? Perhaps we should scrap the idea altogether? What is the point of it, after all?

JS: "If we want to understand what it means to be a human being, we have to look at human behavior,"

So, yes, let's dismiss thoughts, feelings, the notion of understanding - indeed anything at all that can't be 'observed'. This is Skinner pure and simple... Though you are not alone. Skinner is far from dead in contemporary (analytic) philosophy. He hangs around like Banquo's ghost.

JS: "The point is that in actual cases it is obvious what we mean when we say "that person is conscious."  In some cases, for example, we mean, "that person is awake and alert."

Exactly. But when we try to understand what human consciousness is, do we mean nothing more than 'being awake and alert'? I certainly don't.  If that were all that's involved I wouldn't give the topic the time of day. (And remember, even a cat can be 'awake and alert'.)
 
JS: "It is a question of whether you want to define "consciousness" to be something which can be discovered or if you want to define it as something which must remain beyond comprehension.

But you're suggesting that science is the only way of discovering something. If you really think that, why would you have any use at all for philosophy?  Or do you think that philosophy is nothing more than science? (There again you would not be alone...)

DA

2009-06-06
The 'Explanatory Gap'
Reply to Derek Allan
AT: "it is the collective activity of such cells when they are organized into the right kind of brain mechanism (e.g., the retinoid mechanism) that realizes the property of consciousness"

DA: "'Realizes'?  What does that mean?"

In this context, "realizes" means "brings into existence".


 AT: "I've argued that consciousness is a transparent representation of the world from a privileged egocentric perspective. I'd be interested in your principled argument against this definition of consciousness."

DA: "I have so many questions about what you mean by it, I couldn't possibly argue against it."


Here is a concrete example of what I mean. Suppose you are in a museum looking at Canaletto's painting of The Grand Canal in Venice. In this case, you perceive a painting at some distance in front of you as it appears to you from your privileged egocentric perspective, because nobody else can see it from your point of view. It is a part of your unique experience of the world at that moment. You're aware of the painting of the scene as being only a representation of a scene in Venice, not the real thing. At the same time, the painting itself and everything around you is perceived by you as the real world out there. Your awareness of the content of the painting as a representation is an example of an opaque representation. However, your awareness of the painting itself and everything around you is called a transparent representation because you are not aware of it as a representation; i.e., it's like seeing through your brain's representation of your surroundings as if the representation in your brain were transparent --- here, transparent is a metaphor. Would this capture an essential part of your experience while looking at a painting in a museum?

.. AT

2009-06-06
The 'Explanatory Gap'
Reply to Arnold Trehub
AT: "In this context, "realizes" means "brings into existence".

So, the neurons moving around somehow cause thoughts, feelings, etc. - make them 'come into existence'?  I wonder what causes the neurons themselves to move?  Do they just somehow spring into action of their own accord, willy-nilly, making me feel sad or happy at random?  How does this work exactly?

AT: Would this capture an essential part of your experience while looking at a painting in a museum?

No. I don't think art is essentially representation. (This line of discussion will not lead us anywhere. We would be confusing two areas of philosophy.)

DA


2009-06-06
The 'Explanatory Gap'
RF: "This message is about something (I hope), as are all communications."

The message, yes. But that doesn't  necessarily tell us anything about consciousness.

RF: "In particular, if you think consciousness is "what makes us special", then you're unlikely to be receptive to any explanation that threatens to dissipate that aura."

I'm not sure I said it's what makes us "special". It's also, presumably, what makes us the terrors of the earth - often self-centred, consumed with greed, assassins, etc.

But leaving that aside, why in your view (and others might like to comment on this too) would we want to study consciousness in the first place - I mean, at all. It can't surely be just because it's an example of neurons doing their thing.  Our most insignificant physical actions no doubt cause that too. Why single out consciousness? Why bring the big artillery of philosophy to bear on that?  Why bother, after all ... ?

DA

2009-06-06
The 'Explanatory Gap'
Reply to Derek Allan
DA:  "Perhaps we should scrap the idea altogether? What is the point of it, after all?"

I think the term has value and function.  Part of that function is to refer to being awake and alert.  But that does not mean that, when we try to understand consciousness, we are only trying to understand being awake and alert.

DA:  "yes, let's dismiss thoughts, feelings, the notion of understanding - indeed anything at all that can't be 'observed'."

Thoughts, feelings, and ideas can all be observed.  The fact that we are responding to each other's ideas indicates that we can observe them.  If thinking and understanding were not observable, our teachers would have a hard time judging their students' progress.  If feelings couldn't be observed, husbands and wives would never know when to say "I'm sorry."

DA:  "But you're suggesting that science is the only way of discovering something."

Not at all.  I'm suggesting that "science" is a word we use to refer to whatever methodical processes lead to discovery.  The word "science" does not refer to a specific way of discovering anything.

If you think the word "science" refers to a specific way of discovering something, then what would that be?


2009-06-06
The 'Explanatory Gap'
Reply to Derek Allan
RF: "This message is about something (I hope), as are all communications."
DA: The message, yes. But that doesn't  necessarily tell us anything about consciousness.
But you've removed the context! That was intended to illustrate the difference between intrinsic and derived intentionality, which in my view can tell us a great deal about consciousness, and I honestly believe you would benefit from looking into it, given your obvious interest in the area.

RF: "In particular, if you think consciousness is "what makes us special", then you're unlikely to be receptive to any explanation that threatens to dissipate that aura."
DA: I'm not sure I said it's what makes us "special". It's also, presumably, what makes us the terrors of the earth - often self-centred, consumed with greed, assassins, etc.
Either way, it's value-laden, isn't it? The fear is that, by treating "Mind As Machine" (the title of Margaret Boden's excellent history of cognitive science), we reduce people to mere mechanism and thus eliminate all value, responsibility, love, art, etc. (As well as self-centeredness, greed, crime and so on.) But values aren't facts. The mistake is to assume that, to preserve values, we have to find factual foundations for them. That puts too high a value on objectivity, as well as arguably being a category error. A subjective, or rather intersubjective, foundation for values is perfectly valid, and there's no need to fear an objective analysis of consciousness.

DA: But leaving that aside, why in your view (and others might like to comment on this too) would we want to study consciousness in the first place - I mean, at all. It can't surely be just because it's an example of neurons doing their thing.  Our most insignificant physical actions no doubt cause that too. Why single out consciousness? Why bring the big artillery of philosophy to bear on that?  Why bother, after all ... ?
Isn't it obvious? To understand ourselves, of course!

As an undergraduate and for quite a few years thereafter I, like you, wrapped up all human values in the concept of consciousness, which I saw as threatened by the likes of Daniel Dennett, and I don't think it's too strong to say he was my ideological archenemy. I've now come around to the view that, in strictly objective terms, there is no such thing as consciousness, or free will, but that's OK, because subjective and intersubjective ways of thinking and acting are in general terms just as valid as objective ones, and in some contexts (notably social ones) they are actually superior. Of course, many situations in which an emotional response is appropriate require the deployment of intelligence too. Ideally, these work hand-in-hand, but that's no excuse to confuse facts with values.

2009-06-06
The 'Explanatory Gap'

THE CARTESIAN BOTTOM LINE ON SCEPTICISM


JS: "Your [feeling/functing] distinction begs the question against a functional explanation of feelings."

Actually, it just begs an answer -- but the answer is not forthcoming. It just keeps being asserted that either there is nothing to explain, or it has already been explained.

JS: "The language of feeling is a way of speaking about our behavior as being a reaction to internal states, and a way of speaking about our internal states as reactions to external events—... the language implies causality."

Actions and reactions are just actions -- doings, functings -- hence unproblematic. But feelings are not doings; they are feelings.

Nor does the locution "internal states" help (apart from its being equivocal about whether it means internal to the body, unproblematically, or internal to the mind, in other words, felt, which is, again, what it is that we are seeking an explanation for).

Speaking about "feeling" is speaking about felt internal states. I don't use "feeling" when I speak about my atrial fibrillations because I don't feel my atrial fibrillations, even though they too are "internal states."

And, yes, when I withdraw my hand from the flame because it hurts, my language implies that the feeling is causing the withdrawing. Moreover, it feels like I'm withdrawing my hand because of the feeling (pain); it also feels like I'm withdrawing my hand because I felt like it -- in particular because I willed it. 

But all that is begging for a causal explanation of how and why -- not the question-begging assertion that "the language implies causality."

JS: "Inexplicably, you say feelings have no causal role to play.  Whatever you mean by the word “feelings,” then, it is not what is commonly meant by the term."

(1) I mean by "feelings" precisely what everyone means by feelings.

(2) If you disagree that there exists no explanation of how and why we feel, then please draw my attention to the how/why explanation I somehow seem to have missed!

(3) I not only pointed out that no causal explanation of feeling has been provided (3a). I also went on to say that I don't think that a causal explanation can be provided (3b), and why: because there is no room for feelings to have any causal power (no 5th force; telekinesis is false).

JS:  "If feelings have no consequences for anything, then any valid results of our discourse are valid regardless of whether or not feelings exist.  To appeal to feelings—to talk of them at all—is superfluous.  So, not only do I not know what you mean by the term “feelings,” I do not see how you could justify postulating them."

Yes, feelings are not really causal (hence they are superfluous), even though they feel causal. However, feelings do exist. Moreover, they keep feeling causal regardless of whether they are or are not really causal, and regardless of whether or not we can explain how and why they are causal. 

Hence our discourse about feeling is perfectly valid regarding both their existence and what feels like their causal role. But when we go on to say that their causal role is in reality what it feels like it is, then our "language" is making an invalid inference.

You do, of course, know exactly what I mean by feelings; everyone does. I need not "postulate" them because you know as well as I know, and Descartes knew, that they exist. We are talking here about explaining their causal function, and you keep begging the question (even though you don't seem to feel you are begging the question!).

JS: "There’s a feeling, a feeler, and an object/event which the feeling represents.  You are treating the represented object as though it were the feeling.  Feelings do not represent themselves... The fact that feelings represent implies that feelings perform a function.  They do work.  So opposing them to “functing” (or “doing”) does not make sense."

I have not said a word about "representation" (which I consider to be yet another weasel-word in discourse about the feeling/function problem). 

A feeling feels like whatever it feels like. If/when I feel a toothache, and I do have a tooth, and there is something wrong with my tooth, then my feeling is veridically correlated with something in the world; if not, then not. 

So far, that's correlation, not causation. Correlates (feelings and functings) do not need to be causes of one another: they can both be the effects of a third cause (functing).

If you think feelings perform a causal function qua feelings -- rather than as the superfluous effects of the functing that is performing the real causal function -- please state clearly how and why. Otherwise what does not make sense is to keep insisting, despite the inability to explain how or why, that feelings really do "do work" (as opposed to just feeling like they do).

JS: "You accused me of “complementing the wrong category” because I argued that “feeling something” is a complemented category.  I was responding to a post in which you repeatedly claimed that 'feeling something' had no complement."

Yes, I'm afraid you keep misunderstanding that point, but as it's my point, I accept full responsibility for making it clear, so I will now have another go:

What I keep saying in my posts is that "feeling a toothache" as complemented by "feeling a headache" (i.e., "feeling a non-toothache," or "not-feeling a toothache") is perfectly well-complemented, and perfectly unproblematic. What is uncomplemented is "feeling any feeling at all" as complemented by "not-feeling any feeling at all" (shorthand: "feeling something" vs. "feeling nothing").

And it is indeed feeling -- the generic category that covers all sense modalities, exteroceptive (like seeing, hearing) and interoceptive (like fatigue, anxiety or grief), and all manner of feeling -- that is at issue here. If you pick a specific feeling modality, such as, say, tasting, the complementation problem does not arise: Tasting vanilla ice-cream is complemented by tasting chocolate ice-cream, and "tasting any taste at all" (i.e., tasting something) is perfectly well complemented by not tasting anything at all (i.e., tasting nothing). (It feels like something to taste nothing at all, just like it feels like something to be blind, i.e., to not see anything at all.)

But the analogue does not work for feeling itself, for you are always feeling something if you are not obtunded or dead, and it is impossible to feel nothing at all.

(A congenitally blind person is in something like the epistemic situation regarding blindness (apart from hearsay) that we are all in regarding feeling (and about what it feels like to be a bat): He has heard that people can see, and that he can't, and he has felt what it feels like to be unable to see. But as he has never felt what it is like to see, "what it feels like to be blind" is uncomplemented for him -- just as what it feels like to be a bachelor is uncomplemented for me. If an operation one day allows him to see, he will discover something new not only about what it feels like to see, but about what it feels like to be blind. Only if his vision again disappears will he be in the same sentient situation as the tasting person who momentarily tastes nothing.)

JS: "To repeat: there is no feeling of feeling something, because the category of “feeling something” does not pick out a specific feel.  It represents feelings in general; but it does not feel like what it represents.  It does not represent its own feeling."

I am afraid that argument does not become more persuasive with repetition. Consider:

"There is no feeling of tasting something, because the category of 'tasting something' does not pick out a specific taste."  

I think it's pretty self-evident that that's false, and that you would never make such an assertion in ordinary discourse:

X: "Do you taste something?" 
Y: "I don't understand your question."
X: "Why not?"
Y: "Because you haven't picked out a specific taste. You must ask, for example, 'Do you taste vanilla ice-cream?' Then I would understand the question."

OR

X: "Ladies and gentlemen. I have with me today the subject of the world's first long-term gustatory deprivation experiment. He has had his sense of taste chemically suppressed for a month, and has just tasted something for the first time since his taste has been restored: After all that time, what did it feel like to taste something?"
Y: "I don't understand your question."
X: "Why not?"
Y: "Because you haven't picked out a specific taste...."

And again, we are not talking about "representing" feeling here, but about feeling feeling.

JS: "somebody asks you, “what would it feel like to be a rock?” You could respond... “It wouldn’t feel like anything.  Rocks don’t feel.”  And wouldn’t this... make sense?"

But what on earth would give you the impression that I would say it doesn't make sense? I'm pretty sure rocks don't feel. I know for sure I do feel. I said "feeling" was an uncomplemented category, not an empty one. (What I can't make sense of is why you would even ask me whether this would make sense!)

JS: "How might you answer the question, 'what does it feel like to feel?'” 

That you know perfectly well. (And you do.) Just as you know what it feels like to taste, or what tastes taste like.

JS: "We can use these categories, and doing so even makes sense; but we are making a mistake if we think we are thereby referring to anything."

These categories (plural)? I thought we were only talking about one problem category: "feeling." And I said that it was uncomplemented (hence problematic) but certainly not empty.

---- SH:  “If the only sense-modality were vision, and the only experience were to see shapes, and all shapes were colored -- counting black as a color -- then the subordinate category "red" would be complemented by anything non-red, but the superordinate category "colored" would be uncomplemented.”
JS: "'Colored' would also be complemented by the category 'shaped,' because the same shapes would be recognizable as such despite having different colors."

I think you have again missed my point here. The complement of "colored" is "uncolored," and that category is empty in the hypothetical visual toy-world I concocted. Particular shapes (triangular, square) would be complemented, just as particular colors (red, green) would be; but both uncolored and unshaped would be empty in this world. In our multimodal world we have sounds and smells to complement colors and shapes.

JS: "You say our concept of “feeling something” is established by our knowledge of an invariant feeling present in all feelings.  This requires that all of our feelings are known as particular feelings before we can have the category “feeling something.”  Yet, there is no knowledge of particulars without general categories.  (The notion of a particular is the notion of an instance of a universal.)  The category of “feeling something” cannot come later.

I am not quite sure where these rather abstract regulations are coming from: I can taste this and I can taste that, and I've already got some taste categories. Then I can sample nongustatory feeling, and I've got the category "taste" complemented. But with tasting this and tasting that (all positive instances of "tasting") I already had a sense of what it feels like to taste something -- though it would be a lop-sided sense until I complemented it.

I am not doing individual/universal ontology here. I'm just talking about the phenomenology of feeling and the epistemology of category acquisition: from particular instances to the categories of which they are instances, via the invariant properties that distinguish the members of the category from the members of its complement. (Remove the complement and you are still sampling the members of a category, but a problematic category, because all you have sampled are its members, not its non-members.)

(If we are talking about universals here at all, we are talking about "uncomplemented universals": being uncomplemented extensionally [i.e. in their set of instances: positive only], they are also uncomplemented intensionally [in the {here indeterminate} invariant features that normally differentiate positive and negative instances].)

JS: "...you are wrongly inferring from the sense of "I know what it feels like to be a bachelor" that it must refer to something... some category which was already there ahead of time, just waiting to be revealed... —though, if we wanted to, we could define a referent here.  But in so doing we would be drawing a definition, and not revealing one that was already there."

I don't know about "already there ahead of time, just waiting to be revealed." I just know for sure that there are feelings now (and I'm pretty sure there are fermions now too). But I have no idea whether either feelings or fermions were "always there... waiting to be revealed"...

JS: "'Feeling' is also a family resemblance concept.  We learn the word 'feeling' based on indirect observations—on distinguishing emotional or mental reactions as such."

I couldn't follow that. Every instant of our waking life we are feeling -- and feeling "directly," not "indirectly" (whatever the latter means; the only things I feel at all, I feel "directly"). And our observations are all felt observations. The rest is all about distinguishing this category from that, and that includes distinguishing "feeling this" from "feeling that" -- but not distinguishing "feeling something" from "feeling nothing" (because feeling nothing is an empty category).

The notion of a "family resemblance" category, insofar as I understand it, is the notion of a category that does not have invariants, just lumped disjunctive subsets. I would reply that in those cases where we are indeed capable of reliably assigning membership or nonmembership to all candidates and there is a criterion for correct and incorrect, then we do have a category, and that category must have an invariant (even if it's a long disjunction) -- assuming we are not doing the successful, confirmable category assignment via clairvoyance (which is just as false as telekinesis). 

If we are not capable of reliably assigning membership or nonmembership to all candidates and there is no criterion for correct and incorrect then what we have is not a "family resemblance" category: what we have is no category at all. (Wittgenstein's "private language" argument is valid against the possibility of creating a private language with feeling-categories, because of the impossibility of error, hence error-correction, hence any nonarbitrary criterion for miscategorization: Hence there could simply not be a private language of feeling-categories.)

JS: "We also use the term “feeling” to refer to observation in general, but we have no general criterion for what that means."  

We are talking about felt observation (as opposed to the merely functed kind of "observation" that a surveillance camera connected to an alarm does).

JS: "Why say that, when we observe, all of our observations contain a unique quality, an invariant aspect which is common to all observations?  What would that be?"

The fact that they are felt, rather than just functed, as by a surveillance camera. And what we hear is felt too, rather than just functed, as by an acoustic vibration-detector.

And yes, there is something that seeing this and seeing that and hearing this and hearing that all have in common: they are all felt, rather than just functed.

JS: "You might be tempted to say that the self, the ego, is the invariant entity which all of our observations contain.  But how could we observe our own “I” as an aspect of an observation of something else?"

No, I'm not at all tempted to invoke an ego as the invariant. I doubt that a horseshoe crab has much of an ego, even though he sees. And for all I know, both (1) my feeling of continuous identity across time (which, by the way, sometimes flickers and fades a bit, even when I'm awake) and (2) my memories of "my" past are merely instantaneous illusions, parts of what an instant happens to feel like. (And that's without mentioning the fallibility of any theories I may have about "selfhood" -- my own or anyone else's. That's why I prefer "sentio ergo sentitur" to "cogito ergo [ego] sum" -- if, that is, what is at issue is certainty, rather than just truth, or probability.)

So, no, the only invariant I invoke is the fact that we are feeling, whenever (and whatever) we are feeling.

JS: "Try to observe yourself looking at something—say, a table.  To do this, you might be tempted to say something to yourself, such as, “I am looking at a table.”  But saying is not the same as observing.  So don’t form words in your head.  You might find yourself noticing your body . . . and that helps you remember that you are in the world along with the table.  But it does not show you you-looking-at-the-table.  There is only the observation of the table and your ability to talk about yourself as the observer.  This suggests that the notion of an observer is constructed with language; it is a grammatical convention, a way of speaking (or, as Wittgenstein would say, a way of life.)."

I have a feeling I am being drawn into a side issue that has nothing to do with what I proposed: Whenever and whatever I observe -- be it a table, or me looking at a table, or just "ouch" -- it feels like something, and it is the how/why of that fact, and nothing else, that is at issue here.

And the certain fact that I feel (and the almost-certain fact that a worm does too) has nothing whatsoever to do with language (let alone "grammar," which just refers to the syntactic rules for well-formedness in a formal system).

Feeling is indeed a way (indeed a fact) of life -- but alas an unexplained (and, I think, an unexplainable) one.

JS: "All judgments, even logical and mathematic[al] ones, were doubtable for Descartes...  He explicitly concluded that the very first certainty was the cogito."

I confessed shame-facedly that I am no Descartes scholar (and now I will further confess that I have read little of chapter and verse). And yet I think I can make coherent sense of Descartes. And on my construal, all the stuff about God is transparently irrational nonsense (and I cheerfully accept Descartes' invitation to read between the lines, and infer therefrom that he didn't really mean that irrational nonsense, so opposite is it to the rigorous things he said about certainty). 

The method of doubt makes far more sense if it is based on the usual sceptical argument about the uncertainty (not the falsity) of the reality of the experiential (felt) world of appearances, including science, compared to the certainty (grounded in logical necessity) that NOT (P&NOT-P). 

But what the method of doubt further reveals is that there is, surprisingly, a second kind of certainty, over and above logical necessity: an experiential (felt) certainty, in many ways the diametric opposite of the first, formal certainty, and issuing from the very heart of what is most uncertain, what is most vulnerable to sceptical doubt, namely, whether things are really the way it feels as if they are. And that certainty is the very fact of feeling itself (if/when one is feeling).

Since one cannot plausibly invoke the dangers of the Inquisition to justify feigned scepticism about formally necessary truths in the same way that one can plausibly invoke the dangers of the Inquisition to justify feigned fideism, I can only conclude that Descartes understated the certainty of mathematics either (1) for strategic reasons -- to further reinforce the certainty of feeling -- or (2) because he thought that most people could not hold a proof much longer than NOT (P&NOT-P) in their heads long enough to be certain about it. 

(The only other construal I can think of would be that he was simply wrong on this point, and had not fully thought it through. But I rather doubt that, from all the other evidence of Descartes' rigor and rationality. But who knows? Newton had his bugaboos too!)

Moreover, I don't think the significance of the 2nd certainty -- that we feel -- is that it provides a rational or methodological basis for science (apart from the fact that every certain truth is welcome in science). I think its significance is in having laid bare the explanatory gap: that feelings exist with certainty, yet we cannot explain how or why.

JS: "I think Descartes’ decision to doubt mathematical judgments was well-considered.  You had to learn mathematics, and it is conceivable that you learned it all incorrectly."  

That's (2) above. But it doesn't cover the face-valid necessity of NOT (P&NOT-P) -- except perhaps for Achilles and the Tortoise.

(But no one as obtuse as the Tortoise would be able to apprehend the certainty of the sentio either, though that would not make its truth any less certain, if the Tortoise was indeed feeling. Nor would the Tortoise's obtuseness make NOT (P&NOT-P) any less certain. I rather think that at the Tortoise's level of obtuseness, it is not just certainty that gets mooted, but truth/falsity, affirmation/denial, and belief/disbelief too! What's certain is that the Tortoise could not even earn his daily lettuce if he were that incoherent (and insouciant). I do sense, though, some conflation or breakdown of the distinction between subjective [i.e., felt] certainty and formal [hence objective] necessity here. Maybe this is what Descartes meant by "conceiving clearly and distinctly"...)

JS: "Even if [the sentio] were true (and I doubt it is), it would not demonstrate that feelings lacked causal efficacy."

That's right. The sentio just establishes the certainty of the existence of feelings. It is the empirical falsity of telekinetic dualism and the empirical nonexistence of a 5th force that seem to entail that feelings are doomed to be noncausal (unless you have an explanation of how and why -- or you have evidence of a telekinetic 5th force)... 

JS: "If you say “I think, therefore I am” to yourself, you are no more convinced of your existence than you had been previously." 

No. But it is drawn to my (momentary) attention, clearly and distinctly, that I cannot doubt the existence of feeling (the way I can doubt so much else that I feel).

JS: "...the I exists only in so far as there is... the kind of thinking which utilizes certain grammatical forms.  The mistake is in thinking that grammatical forms always refer to specific things."

It is not about the existence of the "I" and it is not about grammatical forms. It's about the existence of feelings, irrespective of syntax (be it "sentio ergo sentitur" or "I am doubting I am thinking, but doubting is thinking, hence I cannot doubt that thinking is going on, after all; hence there is thinking.")

JS: "Descartes noticed that the act of thinking “I am not thinking” implied that he was thinking.  And he concluded that only via such action did he exist as such.  Yet, he misinterpreted the nature of the action.  He believed the word “I” had to refer to something, and since everything outside of the act of thinking was dubitable, he postulated himself as “pure consciousness.”  If we wanted to interpret “pure consciousness” here, we might regard it as “thing which uses grammar.”  Any other interpretation could seem extravagant."

I have no idea what "pure consciousness" means, nor what "grammar" has to do with it. The only truth that is free of mystification here is that if you feel something (and we do) then that's one less thing you can be sceptical about. That's all. The rest is about explaining how and why we feel.

JS: "my question remains:  How is your argument for a feeling/functing distinction different from the theistic arguments I mentioned?"

It differs in that the only thing the sentitur entails is that feelings exist (not that whatever you feel -- e.g. grass, people, your body, gods, hobgoblins, heaven -- exists). It is a bottom limit on incredulous scepticism, rather than yet another form of fideism. (But apart from that, it just points out a problem -- the unexplained existence of feeling -- not the solution.)

JS: "Descartes believed that the act of thinking “I am not thinking” produces an awareness of a contradiction..." 

So far, this construal presupposes that a contradiction does have certifying force after all (necessary truth on pain of contradiction). [This reinforces my hunch that Descartes did consider NOT (P & NOT-P) certain too.]

JS: "But this is only to say that the utterance of the sentence “I am not thinking” feels contradictory.  And how is the correct interpretation of that feeling established?"

Let me restate it in the form that makes the contradiction more obvious: "Thinking (and feeling) 'I am not feeling' feels contradictory" -- and it sure is, if anything is. No interpretational issues at all. Just an understanding of what it means to feel, and what it means to affirm (and deny). (And, of course, being compos mentis.)

The sentitur version is much simpler than that: Because we feel (when we feel), we can be certain feeling exists. Not just certain that "I am feeling now and I am not feeling now" is self-contradictory, but that there is one fact -- and one fact alone -- of felt experience that is not open to sceptical doubt: that it is felt. Sentitur.

JS: "Couldn’t one doubt the feeling of a contradiction?  Couldn’t one doubt one’s grammar?"

One cannot doubt a statement that is necessarily true on pain of contradiction. One can only fail to understand it. But the cogito/sentio is not just a tautology like "If I'm feeling then I'm feeling" which has no more synthetic force than "If I'm flying then I'm flying" -- for there is no way I can know with certainty that I am flying. But I can know with certainty that I am feeling. (That's why I said "I don't know which one of Kant's baroque categories is the right name for it, but the cogito is either a "synthetic a-priori" or an "analytic a-posteriori": it certainly isn't an analytic a-priori (i.e., a tautology).")

JS: "...if you really doubted that you felt anything, a pinch on the cheek wouldn’t prove anything to you."

If you really doubted that you felt anything at all then all that would prove was that, like the tortoise, you had not understood the question (or that you were not compos mentis).

JS: "You think you can doubt that you have a body, but not that you have a mind.  This, I maintain, is impossible."  

Jason, I don't think you have quite understood scepticism. The sceptic does not say it is true that you have no body, just that it is not impossible. Hence you cannot be certain that you have a body. (You cannot be certain there are no gods either, or that there is gravity!) But you can be certain that (1) NOT (P & NOT-P) -- and (2) that you are feeling (when you feel). And hence you can be certain that feeling exists (though not how or why!).

JS: "the statement “you do not exist” invites just as much contradiction as “I do not exist,” and saying “you do not have a mind” is just as meaningful as saying “I do not have a mind.” 

To repeat: This is not about truth but about certainty. And it is not about the existence of an "I" but about the existence of a feeling, which, be it ever so fleeting, also entails a "feeler," whatever that means...

-- SH


2009-06-06
The 'Explanatory Gap'

WHAT MAKES "ABOUTNESS" MENTAL



RF: "Some philosophers [say] messages and such have merely derived intentionality, while mental states are intrinsically intentional, but I've yet to see a convincing explanation of the difference between intrinsic and derived intentionality" 
How about this one:

There is no difference between the string of symbols "the cat is on the mat" when it is instantiated in a static book, in a dynamic computer program or in a dynamic (toy) robot. 

In the book and the program, all the meaning ("intentionality," "aboutness")  is in the mind of the interpreter (reader, author, programmer, user), not in the book or the program. (I.e., the meaning of all the symbols and symbol strings is derived, not intrinsic to the book or program.) Indeed, not only is the meaning not intrinsic, it is not even grounded, in the sense that neither the book nor the computer program has the sensorimotor capacity to interact with the things that its symbols are systematically interpretable as being about in a way that is (likewise systematically) congruent with what the symbols are systematically interpretable as being about.

Ditto for a toy robot. In the case of a Turing-Test (TT) scale robot, whose performance capacity in the world of objects and discourse is indistinguishable from that of any of the rest of us (for a lifetime, if need be), the internal symbol strings are indeed grounded in the TT robot's capacity for sensorimotor interaction with what they are systematically interpretable as being about. That takes the external interpreter out of the loop; but that's still just sensorimotor grounding, not intrinsic meaning.

The only way meaning becomes intrinsic is if there is something it feels like to be the TT robot, instantiating the symbol string in question.

It is not at all clear how and why there is (or need be) felt meaning rather than just sensorimotor (robotic) grounding (i.e., just functing). That's another variant of the explanatory gap.
RF: "a clear and concise explanation of what consciousness is, is impossible."
No need. It's just feeling, and we all know what that is. What needs explanation is not what feeling is, but how and why it exists. That too is the explanatory gap.
Harnad, S. (1990) The Symbol Grounding Problem. Physica D 42: 335-346. 
Harnad, S. (1992) There is only one mind body problem. International Journal of Psychology 27(3-4) p. 521 
Harnad, S. (2001) Harnad on Dennett on Chalmers on Consciousness: The Mind/Body Problem is the Feeling/Function Problem. (Unpublished Preprint)
Harnad, S. (2001) Minds, Machines and Searle II: What's Wrong and Right About Searle's Chinese Room Argument? In: M. Bishop & J. Preston (eds.) Essays on Searle's Chinese Room Argument. Oxford University Press. 
Harnad, S. (2007) From Knowing How To Knowing That: Acquiring Categories By Word of Mouth. Presented at Kazimierz Naturalized Epistemology Workshop (KNEW), Kazimierz, Poland, 2 September 2007. 
Harnad, S. and Scherzer, P. (2007) First, Scale Up to the Robotic Turing Test, Then Worry About Feeling. In Proceedings of 2007 Fall Symposium on AI and Consciousness, Washington DC. 
Harnad, S. (2008) The Annotation Game: On Turing (1950) on Computing, Machinery and Intelligence. In: Epstein, Robert & Peters, Grace (Eds.) Parsing the Turing Test: Philosophical and Methodological Issues in the Quest for the Thinking Computer. Springer.









2009-06-07
The 'Explanatory Gap'
RF: "Isn't it obvious? To understand ourselves, of course!"

But why consciousness specifically? Presumably studying the operations of the big toe also contributes to understanding ourselves.

RF: "As an undergraduate and for quite a few years thereafter I, like you, wrapped up all human values in the concept of consciousness.."

The question of values is not essential to my argument. I was replying to the suggestion that I thought consciousness made us 'special'.  Since this carries overtones of specially 'valuable' I was simply pointing out that many things human beings do are not specially valuable.  The question of consciousness goes much deeper than questions about right or wrong (though I think this would be part of it).

DA

2009-06-07
The 'Explanatory Gap'
JS:  Thoughts, feelings, and ideas can all be observed.  The fact that we are responding to each other's ideas indicates that we can observe them. 

The fact that we respond to (including misunderstand) each other's ideas, thoughts and feelings is one thing.  Whether we can observe them is quite another.  I personally have never observed a thought, feeling, or idea, and I don't know anyone who has. Nor do I ever expect to meet anyone who has. I have observed what seem to be the consequences of them, many times, but never the things themselves. On occasion, indeed, the consequences do not even seem to accurately reflect the thought, feeling or idea. Sometimes people say they feel sad about something when I have a sneaking suspicion they are in fact quite happy about it.  Of course I may have been wrong...

JS: Not at all.  I'm suggesting that "science" is a word we use to refer to whatever methodical processes lead to discovery. 

But this is not a normal use of the word science, surely? Science has its own particular methods and procedures (and assumptions). You seem to be suggesting that anything that leads to a discovery is science. Presumably anything that leads to a scientific discovery is science (though some have happened partly by chance) but that is a different matter.

DA


2009-06-07
The 'Explanatory Gap'
Reply to Stevan Harnad
RF: I've yet to see a convincing explanation of the difference between intrinsic and derived intentionality

SH: How about this one... The only way meaning becomes intrinsic is if there is something it feels like to be the TT robot, instantiating the symbol string in question.
I don't consider that an explanation. It describes how these words are used, but it begs the question as to why it feels like something to be some things, in other words why some systems are intrinsically intentional. A proper explanation of the difference between intrinsic and derived intentionality would close the explanatory gap.

RF: a clear and concise explanation of what consciousness is, is impossible.

SH: No need. It's just feeling, and we all know what that is. What needs explanation is not what feeling is, but how and why it exists. That too is the explanatory gap.
Here again, all you're doing is insisting that your particular formulation of the problem is best. But it's brought you up against a stone wall, with no clue as to how to begin to find a way around it. I'd suggest that tells us there's a problem with your formulation.

Here's a suggestion as to where that problem might lie: feelings are reasons for action, not causes of it, and the types of discourse in which these concepts occur are different.

Broadly speaking, allowing for the inevitable imprecision in the use of common words (which I'm guilty of myself), "why" questions are answered by giving reasons, while "how" questions are answered in terms of causes. To look for causal effects of feelings is to commit a category error, like trying to fit a chess piece into a jigsaw puzzle. Reasons are appropriate in social, intersubjective contexts, while causes occur in mechanistic, technical or scientific narratives, i.e. stories about objects, not subjects. In a strictly objective, hard science explanation, there is no place for reasons or feelings, and no need for these concepts. That's why we have the social sciences, and the arts and humanities.

We need to map the boundaries between the disciplines, and the types of narrative, not just blindly wander into the territory of one while still carrying what is now useless baggage because it belongs to another. Some people insist on asking why the universe exists, looking for a reason as opposed to a scientific, causal account, and to insist that the concept of feelings should fit into such an account is a very similar mistake, just the other way around.

2009-06-07
The 'Explanatory Gap'
Reply to Stevan Harnad
Stevan,

Before I respond to your last post, I wonder if you could answer one question which you seem to have overlooked in your response.  Wouldn’t we say that the brain (or whatever feels feelings) would not be the same had feelings not existed?


Derek,

You say, " I personally have never observed a thought, feeling, or idea, and I don't know anyone who has."

It seems plain to me that I observe thoughts and ideas as words and images in my head, and as words and images expressed by other people.  I observe feelings in my own bodily reactions to events, and in the bodily reactions of others.  I cannot believe that you do not observe any of these things.  So, when you say you do not observe thoughts, feelings, or ideas, I must conclude that you are talking about something else.  The question is, what are you talking about?

JS:: "Not at all.  I'm suggesting that "science" is a word we use to refer to whatever methodical processes lead to discovery."

DA:  "But this is not a normal use of the word science, surely? Science has its own particular methods and procedures (and assumptions)."

Actually, I think my use is quite standard, at least among scientists, though it may not normally be expressed in the terms I use.

And while it is true that individual sciences are defined by their own methods and procedures, I would not say there are any specific methods or procedures which define science as a whole.  If you disagree, I wonder what methods and procedures you have in mind.

DA:  "You seem to be suggesting that anything that leads to a discovery is science. Presumably anything that leads to a scientific discovery is science (though some have happened partly by chance) but that is a different matter."

Accidents are not scientific per se, though science can use them.  An accident can lead to a method of discovery.



2009-06-08
The 'Explanatory Gap'
JS: "It seems plain to me that I observe thoughts and ideas as words and images in my head"

I have never observed a word, except in writing (which is simply a convention). What does a word look like? Is it big, little, red, green, purple, triangular, square? Does it move around? If so with what kind of motion? Circular? Straight line? Same questions for thoughts and feelings. 

JS: 'And while it is true that individual sciences are defined by their own methods and procedures, I would not say there are any specific methods or procedures which define science as a whole."

Then you are in a very small minority, I would have thought.  Are the methods or procedures of science the same as those a novelist uses to write a novel, for example, or a composer to write a symphony, or a historian to write history?  Science was born at a particular point in history when, precisely, its methods and procedures were accepted as a valid means of knowledge. Before that there was no science.  Science is not just a synonym for knowledge (except in the minds of some philosophers of a scientistic persuasion...)

DA


  

2009-06-08
The 'Explanatory Gap'
CORTICAL COUNTERFACTUALS
JS: "Wouldn’t we say that the brain (or whatever feels feelings) would not be the same had feelings not existed?"
As I am just about as sure (modulo scepticism) that the brain causes feeling (somehow) as I am of any other apparent empirical fact, I of course agree that a brain that could not cause feeling would be a different brain!

But seconding this relatively anodyne assertion does not alter by one synapse the real problem, which is that there is no explanation of how the brain causes feeling, and even more problematically, there is no explanation of why. For whereas the brain-cause of feeling (whatever that is, and whatever way it manages to cause feeling) certainly has causal power, the feelings themselves do not, even though it feels like they do. Indeed, they cannot have causal power except if telekinetic dualism is true, and feeling constitutes a 5th fundamental force in the universe... 

So the brain causes both doing (explicably) and feeling (inexplicably), but the feeling causes nothing, and what it feels like feeling causes is really just caused by the causes of feeling, with the feeling just dangling there, ineffectually (and inexplicably).


-- SH 



2009-06-08
The 'Explanatory Gap'

HERMENEUTICS DOES NOT CLOSE THE EXPLANATORY GAP, IT JUST OBFUSCATES IT

RF: "A proper explanation of the difference between intrinsic and derived intentionality would close the explanatory gap..."

It sure would! (And your point is?...)

RF: "...feelings are reasons for action, not causes of it..." 

X: Why did you do that?

Y: Because I felt like it.

X: That's a reason?

(And, while you're at it, why are reasons felt, rather than just acted upon [i.e., functed]?)

RF: "...and the types of discourse in which these concepts [reasons and causes] occur are different..."

Discourse? Concepts? I was just asking how organisms feel, and why organisms feel...

RF: "...'why' questions are answered by giving reasons, while 'how' questions are answered in terms of causes..."

X: I wonder why the apple fell?

Y: Because of gravitational attraction.

X: So that's the reason!

And "why" is a causal, functional question too: 

"Why do arch bridges have to have an abutment at either end? To restrain the horizontal thrust." 

"How do arch bridges restrain the horizontal thrust? By having abutments at either end."

And if feeling is indeed noncausal, then substituting "reasons" for causes hangs from a skyhook rather like the Cheshire Cat's smile, by way of explanation. 

(Explanations, incidentally, unlike interpretations, are not immune to objective refutation. And what is at issue in the case of the explanatory gap is a causal explanation of feeling, not a social or linguistic interpretation.)

RF: "Reasons are appropriate in social, intersubjective contexts, while causes occur in mechanistic, technical or scientific narratives..."

So why do worms feel? For social, intersubjective reasons? What's their "narrative"?

RF: "That's why we have the social sciences, and the arts and humanities..."

But without the brain (and its causal powers) the social sciences, arts and humanities would not have us...

-- SH


2009-06-10
The 'Explanatory Gap'
Reply to Derek Allan
Stevan,

Let me rephrase that last question, since its point was apparently unclear:  Wouldn't whatever feels feelings (in some particular instance) be different if feelings had not been felt (in that particular instance)?

To explain the purpose of this question, consider another:  Do you distinguish between X (that which causes a feeling and Y (in which feelings are felt)?

If so, then "feelings" are events which are not simply causes of feelings; and whatever feels feelings (in some particular case) would be different had these events not occured.

If not, then the term "feelings" refers to the causes of feelings, and those are what you term "functing."


Derek,

DA:  "I have never observed a word, except in writing (which is simply a convention)."

Why do you assume that sight is the only available form of observation?

DA:  "Are the methods or procedures of science the same as those a novelist uses to write a novel, for example, or a composer to write a symphony, or a historian to write history?" 

I wouldn't say writing a novel or composing a symphony were acts of discovery.  As for history, one could argue that historians can be scientific in their methodology to varying degrees.

DA:  "Science was born at a particular point in history when, precisely, its methods and procedures were accepted as a valid means of knowledge."

Again, my question:  What methods and procedures are you talking about?


2009-06-10
The 'Explanatory Gap'
Reply to Stevan Harnad
RF: "A proper explanation of the difference between intrinsic and derived intentionality would close the explanatory gap..."

SH: It sure would! (And your point is?...)

To point out that your purported explanation of the difference between intrinsic and derived intentionality is in fact no such thing.

RF: "...feelings are reasons for action, not causes of it..." 
SH: X: Why did you do that?

SH: Y: Because I felt like it.

SH: X: That's a reason?

OK, I can see that linguistic usage is too sloppy to carry that load.

RF: "Reasons are appropriate in social, intersubjective contexts, while causes occur in mechanistic, technical or scientific narratives..."

SH: So why do worms feel? For social, intersubjective reasons? What's their "narrative"?

I'm sure worms have no narrative. They just take things as they come. The relevant narrative is yours. Why do you suppose that worms feel?

Please note, I'm not saying, and wouldn't say, that worms don't feel. What I'm saying is your supposition that they do tells us more about you. And I'd go further and say it seems to me that the best way to close the explanatory gap is to take the view that whether worms do or do not feel is a matter not of fact, but of opinion. To suppose that they feel is to admit them to the roster of sentient entities, to accept that they are, in a certain touchy-feely sort of way, one of us, to identify with them. So what that attribution actually denotes is a quality of the relationship between you and worms: you would think twice, I guess (as would I), before cutting one in half to test which bits survive (if any). That's what the attribution of consciousness is all about: the things we consider conscious are those with which (whom) we have a certain sort of relationship, those with whom we identify (or at least with whom we believe identification is in principle possible) -- though of course that's a great deal more significant when we're considering other human beings.

But let's look at machines, specifically, what you call a Turing-Test (TT) scale robot. Consider this scenario: such a robot is designed and constructed, but with the abilities of a very young child, including the ability to learn and generally to develop as we do. My contention is that this robot will learn to use the concepts of feeling, consciousness, etc, just exactly as humans do, and in particular, it might easily assent to the Cogito and/or the Sentio -- in principle, it might even independently come to such a conclusion, just as Descartes did. If you think there would be any relevant difference between the robot's eventual capacities and that of the normal human adult, I'd be very interested to hear about it. If you say no, such a robot will actually feel, then I'd like to know what is the difference between actually feeling and being able to use the word "feel", and otherwise reacting appropriately to stimulation, exactly as you do. But note: you're not allowed to imagine you're the robot, because that's begging the question, by assuming that it does feel (or, if you try to imagine nothingness, assuming that it does not).

[Edit: perhaps I should have specified a Total TT-passing robot there.]

Why would I hesitate before chopping a worm in half, even though I consider its capacity to suffer "merely" a matter of opinion? Because some opinions are better than others -- but for moral reasons, not factual ones. To take this view is not to suppose that worms and people are really zombies. It's just to recognise that our knowledge/belief/whatever that we are not zombies is intersubjective, instrumental, a truth of the coherent rather than the correspondent variety. Of course, some of us might also have to reconsider the significance of instrumentalism, the coherence theory of truth, intersubjectivity, etc, etc. Oh, what tangled webs we weave!


2009-06-10
The 'Explanatory Gap'
Reply to Stevan Harnad
DH: "For whereas the brain-cause of feeling (whatever that is, and whatever way it manages to cause feeling) certainly has causal power, the feelings themselves do not, even though it feels like they do.

Stevan, it appears to me that you are a mind-brain dualist in denial. If the brain causes feelings, then feelings must be a particular kind of biophysical process that has causal powers like all biophysical processes. When you claim that feelings have no causal power, you clearly have the burden of explaining how feelings can be physically caused yet play no causal role in the physical universe.


.. AT



2009-06-10
The 'Explanatory Gap'
Please note the following correction.  (I am apparently unable to edit my posts, even immediately after submitting them.)

I wrote:  "To explain the purpose of this question, consider another:  Do you distinguish between X (that which causes a feeling and Y (in which feelings are felt)?"

That should read:  "To explain the purpose of this question, consider another:  Do you distinguish between event X (which causes a particular feeling) and event Y (in which that particular feeling is felt)?"

2009-06-10
The 'Explanatory Gap'
Reply to Stevan Harnad
Something that is puzzling me about this discussion is the absence of any treatment of how anything causes anything.  Take a really simple cause and effect like a moving steel ball hitting a stationary ball: what 'causes' the stationary ball to move?  To help with this analysis, consider two stationary balls touching each other, one of which is hit by a moving ball: at a millisecond before impact, what is the difference at that instant between the moving ball that 'causes' subsequent motion and the stationary ball that just remains in place?  If we cannot explain this difference, if we have no real knowledge of the nature of 'cause', how can we debate reasons versus causes?  If we can explain the difference then the explanation will need to go beyond simple successions of 3D positions and this explanation will certainly need to be incorporated into the debate.

2009-06-10
The 'Explanatory Gap'
JS: "Why do you assume that sight is the only available form of observation?"

I don't. But if, as you say, you 'observe' thoughts, ideas and feelings then, whatever form of observation you are using, you must presumably be able to answer my questions: Are they big, little, red, green, purple, triangular, square? Do they move around? If so with what kind of motion? Circular? Straight line?  In other words, if you can observe them, they must presumably be physical objects. I am just curious about their physical attributes.

JS: "I wouldn't say writing a novel or composing a symphony were acts of discovery.  As for history, one could argue that historians can be scientific in their methodology to varying degrees."

Artists and historians do not deal in scientific discoveries; but science certainly does not have a monopoly on discovering things.

Let's try a more mundane example. When you realize that someone you know is not the pain in the neck you thought he was, is that not a 'discovery'?  Everyday life is full of discoveries; indeed life would be unimaginable if that weren't so. (We would need to know everything there is to know from the word go.)  

JS:" Again, my question: What methods and procedures are you talking about?"

Since Bacon, tomes have been written on the methods and procedures of science and the debate still rages - indeed it is perhaps stormier now than it ever was. I am not going to venture into that territory here. But the assumption behind your question seems to be that there is no other means of valid knowledge except science. This has a name in philosophy - it's called "scientism"...

DA



2009-06-10
The 'Explanatory Gap'
Reply to Arnold Trehub
AT: "it appears to me that you are a mind-brain dualist in denial. If the brain causes feelings, then feelings must be a particular kind of biophysical process that has causal powers like all biophysical processes. When you claim that feelings have no causal power, you clearly have the burden of explaining how feelings can be physically caused yet play no causal role in the physical universe."
Well that was easy! Here we were, thinking there might be a special problem about explaining how and why we feel. And it turns out that all you need to do is say "they're a biophysical process" and the problem's solved: We can go back to describing the brain correlates of feeling and that's all there is and ever was to it. Just as if we had said "the brain causes moving." Interesting that no one ever thought there was a special problem about explaining how and why we move...


-- SH

2009-06-11
The 'Explanatory Gap'

Quantum Mechanical Voodoo


JWKMM: "Something that is puzzling me about this discussion is the absence of any treatment of how anything causes anything."
No, the explanatory gap is not in the explanation of causation; it is in the explanation of how and why we feel rather than just funct.

(This discussion thread is predictably resurrecting and recycling the usual rationalizations and red herrings that keep papering over the explanatory gap. Perhaps it will help to name and identify them. This one is QM voodoo...)


-- SH

2009-06-11
The 'Explanatory Gap'

ON NOT COUNTING ONE'S EXPLANATORY CHICKENS BEFORE THEIR EGGS ARE LAID


JS: "Wouldn't whatever feels feelings (in some particular instance) be different if feelings had not been felt (in that particular instance)?'
This is putting the analytical cart before the explanatory horse. No one has a clue of a clue as to how or why the brain causes feelings. So it does not advance our (non)understanding and our (non)explanation of that fact in any way to start doing an a-priori partition of the nonexistent "components" of that nonexistent explanation.

Keep it simple: We don't know how or why the brain causes feelings. We (rightly) assume it does, somehow. Your a-priori partition does not dispel or lighten the mystery of how or why the brain does that; nor does it carry understanding forward by one nanometer. (It probably spuriously multiplies the mystery, by implying that there is both a feeler and a feeling to account for.) 

The question is how and why. Don't count (as John Searle used to say): Explain (as I would add). 

Nor does it help to try to squeeze extra causality out of an unexplicated causal assumption -- the assumption that surely the brain causes feeling, somehow or other (as we all agree, whether we admit it or not). There is nevertheless a gap there that only a genuine causal explanation (or even a coherent causal hypothesis) can fill. 

When I say that feelings have no causal power, all I mean is that the telekinetic power we all quite naturally feel they have -- "I did it because I felt like it" -- is just felt causality, and cannot be true, because telekinesis is not true. Hence what got done was not caused to get done by my feeling. It was caused by whatever in my brain (somehow) also caused my feeling. That leaves the fact of my feeling the unexplained causal dangler it has been all along.

If it is the notion of a brain structure or process causing feeling rather than (somehow) just "being" feeling (as Arnold Trehub would prefer it) that bothers you, let it be "being" then: How and why is it that some brain structures or processes are felt structures or processes rather than just functed structures or processes (like the rest)? Yes, they are felt. The question is (and remains) how and why.
JS: "[If] you distinguish between event X (which causes a particular feeling) and event Y (in which that particular feeling is felt)... 'feelings' are events which are not simply causes of feelings; and whatever feels feelings (in some particular case) would be different had these events not occurred. If not, then the term 'feelings' refers to the causes of feelings, and those are what you term 'functing'."
Jason, you may feel you are making some explanatory headway from such vague hypothetical conditionals: I feel they are just juggling the non-contents of a completely empty explanatory box, insofar as the question of how and why we feel is concerned...


-- SH



2009-06-11
The 'Explanatory Gap'
Reply to Stevan Harnad
SH: "Here we were, thinking there might be a special problem about explaining how and why we feel. And it turns out that all you need to do is say "they're a biophysical process" and the problem's solved: We can go back to describing the brain correlates of feeling and that's all there is and ever was to it."

There is a special problem in explaining how and why we feel. But the solution is not provided by "describing the brain correlates of feeling". The solution is provided by detailing the structure and dynamics of the brain mechanisms that generate a transparent representation of the world from a privileged egocentric perspective. This is something that no existing artifact is able to do, and this is what constitutes consciousness/feeling. Working out the details of these biophysical mechanisms is a hard problem.

Another very hard problem is explaining why it is commonly felt that feelings cannot be caused by the biological activity of brain mechanisms, and why some feel (e.g., Stevan) that feelings have no causal powers. 

A particularly difficult problem (an explanatory gap?) confronts those who do believe that feelings are caused by the brain but deny the causal power of feelings.


.. AT 



2009-06-11
The 'Explanatory Gap'
Reply to Arnold Trehub

ON THE FUNCTIONAL INDISTINGUISHABILITY OF FUNCTIONAL INDISTINGUISHABLES


AT: "The solution [to the] special problem in explaining how and why we feel... is provided by detailing the structure and dynamics of the brain mechanisms that generate a transparent representation of the world from a privileged egocentric perspective."
I hope that when you disclose the solution, Arnold, you will also disclose how and why a "transparent representation of the world from a privileged egocentric perspective" is a felt "transparent representation of the world from a privileged egocentric perspective" rather than just a functed "transparent representation of the world from a privileged egocentric perspective"... 

That, alas, is the real explanatory gap, which is not just a matter of explaining how and why the neural correlates of a "transparent representation of the world from a privileged egocentric perspective" are indeed generating a "transparent representation of the world from a privileged egocentric perspective." That's not where the problem lies.

For, on the face of it, such a "representation" would appear to be precisely as functional and adaptive for a feelingless Darwinian survival machine that is otherwise much like (indeed, Turing-Indistinguishable from) ourselves. 
Or at least explain how and why there could not be a Darwinian survival machine with a "transparent representation of the world from a privileged egocentric perspective" unless the "transparent representation of the world from a privileged egocentric perspective" was felt.  

In other words, (just as in all perpetual motion machine candidates to date!) something still seems to be missing here: why and how is your "transparent representation" felt rather than just functed?

And while you're at it, I hope you'll also explain how and why worms and slugs feel "ouch" too, if you pinch them... (If you deny that they feel, my prediction is that you will be denying many of the neural correlates of feeling in us too.) Ditto for profoundly demented and near-comatose Alzheimer's patients who no longer have much of a "transparent representation of the world from a privileged egocentric perspective" but still hurt if you pinch them.


-- SH





2009-06-12
The 'Explanatory Gap'
Reply to Stevan Harnad
SH: "No, the explanatory gap is not in the explanation of causation; it is in the explanation of how and why we feel rather than just funct."

It's becoming clear to me now that, in large measure, the explanatory gap is explaining what the explanatory gap is.  In other words, people seem mightily confused not only by what the solution might be, but by what the problem is.

(I think everyone knows my answer by now: I think the explanatory gap is very possibly not a gap at all but a huge, yawning abyss. Or a road sign saying 'dead end'...)

DA

2009-06-12
The 'Explanatory Gap'
Reply to Stevan Harnad
SH: "And "why" is a causal, functional question too: "

It seems to me that you are discriminating between 'causal' functions and 'feelings' without having first explained the physical nature of causation.  I say 'physical' nature because just describing causation as something that precedes something else does not explain how it causes something else.

SH: ".the explanatory gap.. is in the explanation of how and why we feel rather than just funct"

As above, you have not defined 'funct' so cannot discriminate it from 'feel'.  Functions are manifest as successions of events over time so to define funct we will need to have some understanding of time. At the least we will need to declare whether we are presentists or four-dimensionalists.  If we are presentists we might believe that only the present instant, of no duration, exists. The presentist can probably deal with your 'functs' but will deny your feelings because you couldn't possibly feel something in no time at all.  If we were ideological presentists we would simply deny that we had feelings and call them "folk psychology".  However, if we are four dimensionalists we might believe that what presentists call 'causal chains' are actually sets of events laid out in time.  The four dimensionalist would need your 'feelings', which change in time, to breathe life into a causal chain.  Are you a presentist or a four dimensionalist or will you declare that considering the foundations of physics is "voodoo" even when considering a physical problem such as the nature of functions?

Incidentally, declaring whether or not you are a presentist would save a lot of time in these debates on philosophy of mind. An inner life is pretty unlikely if you have no time to enjoy it!

2009-06-12
The 'Explanatory Gap'
Reply to Stevan Harnad
SH: "Or at least explain how and why there could not be a Darwinian survival machine with a "transparent representation of the world from a privileged egocentric perspective" unless the "transparent representation of the world from a privileged egocentric perspective" was felt."

It took me a while to parse this challenge, but it seems to be asking me why a machine with a "transparent representation of the world from a privileged egocentric perspective" (let's say TROWPEP) would be unable to survive in a Darwinian world unless its TROWPEP is actually felt. My answer is that since its TROWPEP is its feeling/consciousness, it already has feeling and has no need to feel its feeling. Put another way, if it didn't have feeling/consciousness it wouldn't have TROWPEP. Moreover, TROWPEP is orthogonal to Darwinian survival. All kinds of lower organisms without TROWPEP survive very well in their ecological niches. Stevan argues that an organism/machine could have TROWPEP without feeling, but he has only his own feeling about feeling to back him up.


SH: "In other words, (just as in all perpetual motion machine candidates to date!) something still seems to be missing here .."

Not so. Perpetual motion machines violate well established physical principles. The retinoid mechanism that gives us TROWPEP violates no physical principles. What seems to be missing for you is a mysterious something that you feel just cannot be a biophysical function.


SH: "And while you're at it, I hope you'll also explain how and why worms and slugs feel "ouch" too, if you pinch them... (If you deny that they feel, my prediction is that you will be denying many of the neural correlates of feeling in us too.)"

You make my case. I do deny that worms and slugs feel "ouch". The reason is that worms and slugs don't have TROWPEP; they can have no representation of the world because they don't have a retinoid system. The reflexive mechanisms in worms and slugs have their counterpart in human neurophysiology and neuroanatomy, but these mechanisms are not a part of the retinoid system and do not play an essential role in the neuronal constitution of feeling/consciousness. So your prediction is wrong. 


... AT

2009-06-12
The 'Explanatory Gap'
Reply to Arnold Trehub
SH: "Or at least explain how and why there could not be a Darwinian survival machine with a 'transparent representation of the world from a privileged egocentric perspective' unless the 'transparent representation of the world from a privileged egocentric perspective' was felt."
AT: "My answer is that since its ['transparent representation of the world from a privileged egocentric perspective'] is its feeling..., it already has feeling and has no need to feel its feeling... [I]f it didn't have feeling... it wouldn't have ['transparent representation of the world from a privileged egocentric perspective']... [and] I do deny that worms and slugs feel... The reason is that worms and slugs don't have ['transparent representation of the world from a privileged egocentric perspective']."
I guess that settles it then... Your theory is right by definition. No need to explain any further...

-- SH



2009-06-13
The 'Explanatory Gap'
Reply to Stevan Harnad
AT: "...transparent representation of the world from a privileged egocentric perspective.."

Can you define "transparent" in physical terms and "egocentric perspective" in geometrical and physical terms so that these concepts can be used in a scientific hypothesis for investigating the brain?  What do we see when we see 'black', for instance in a dark room with a blindfold? If I hold up a sheet of paper to the world it is white because there are few natural images, images occur in instruments like the eye. How does the image that has perspective provide an almost immediate sense of depth and how does it occur simultaneously rather than as a succession of points?

2009-06-13
The 'Explanatory Gap'
Reply to Stevan Harnad
SH: "I guess that settles it then... Your theory is right by definition. No need to explain any further..."

Not so. You fail to distinguish between my definition of consciousness/feeling that is the target of my theory, and my theoretical model; i.e. the retinoid system that I have proposed to explain consciousness/feeling. My theoretical model is not "right by definition". Its validity depends in large measure on the weight of experimental and clinical evidence supporting or negating the explanations and predictions that follow from the retinoid model.

If you disagree with my definition of consciousness/feeling, please provide us with your preferred definition.


... AT 

2009-06-13
The 'Explanatory Gap'
Reply to Stevan Harnad

"Feelings" 
by Morris Albert

TROWPEP, nothing more than TROWPEP,
Trying to forget my TROWPEP of love.
Teardrops rolling down on my face,
Trying to forget my TROWPEP of love.

TROWPEP, for all my life I'll TROWPEP it.
I wish I've never met you, girl;
You'll Never Come Again.

TROWPEP, wo-o-o TROWPEP,
Wo-o-o, feel you again in my arms.

TROWPEP, TROWPEP like I've never lost you
And TROWPEP like i've never have you
Again in my heart.

TROWPEP, for all my life I'll TROWPEP it.
I wish I've never met you, girl;
You'll never come again.

TROWPEP, TROWPEP like I've never lost you
And feelings like i've never have you
Again in my life.

TROWPEP, wo-o-o TROWPEP,
Wo-o-o, TROWPEP again in my arms.

Feelings...

(repeat & fade)

2009-06-13
The 'Explanatory Gap'
Reply to Arnold Trehub
AT: "If you disagree with my definition of... feeling, please provide us with your preferred definition."

(1) Everyone knows what feeling is, as they have all felt. They no more need a definition of feeling than they need a definition of green.

(2) The problem is not defining feeling but explaining it: How and why are some functions felt rather than just functed?

(3) Your "theory" would simply make that into a nonproblem -- by definition.

(4) That's not problem-solving; it is question-begging.


-- SH




2009-06-13
The 'Explanatory Gap'
Reply to Stevan Harnad
SH: "Everyone knows what feeling is, as they have all felt. They no more need a definition of feeling than they need a definition of green."

Excuse my interrupting but statements of this kind seem to me to be where so much of this debate goes wrong.

We only "know what feeling is" in a vague, "experiential" way. That is quite different from the the kind of knowing required for philosophical analysis. I '"know what it is" to feel angry or sad etc, but that in no sense tells me what a feeling is.  If I knew that, I could, for example,say quite clearly how a feeling differs from a thought. I "have" feelings and thoughts all day long, but saying how they differ, and what each is is a quite another matter.  If anyone doubts that, I invite them to try.

I read an essay recently that seemed to be seriously proposing that in analyzing consciousness we should take 'experience' as a kind of self-evident, base concept that needs no further defining, and then go on from there. This is the same basic error.  The assumption again is that since we all "have" experiences we must "know what they are".  If that approach were really valid, why not go the whole hog and say: "We are all conscious, therefore we must know what consciousness is". There would be no further need for philosophical discussion of the matter. Case closed. Move on to the next topic.

DA   

2009-06-13
The 'Explanatory Gap'
Reply to Derek Allan

ON FEELING, FALLING, "DEFINING" AND EXPLAINING


DA: "If that approach were really valid, why not go the whole hog and say: 'We are all conscious, therefore we must know what consciousness is'. There would be no further need for philosophical discussion of the matter. Case closed."
"If that approach were really valid, why not go the whole hog and say: 'We all know what (an apple) falling is'... no further need for philosophical discussion of the matter. Case closed."

Because what is needed is an explanation (e.g., gravity) of the datum, not a "definition."

(In mathematics, you first prove your theorem, and then you formulate a definition. In science you first explain your datum, and then you formulate a definition. Definitions don't explain. An ostensive "definition" of the datum is more than enough to get you started on an explanation -- if, that is, you have an explanation...)

Derek, please do not expect a further response from me if your only rejoinder is the one you keep repeating -- about first needing to "define" the datum (consciousness). We've closed the circle on that one enough times already. No new information is being transmitted in either direction.


-- SH



2009-06-14
The 'Explanatory Gap'
Reply to Stevan Harnad
DA: "(In mathematics, you first prove your theorem, and then you formulate a definition. In science you first explain your datum, and then you formulate a definition. Definitions don't explain. An ostensive "definition" of the datum is more than enough to get you started on an explanation -- if, that is, you have an explanation...)"

Not being a mathematician or a scientist (only a humble philosopher - though admittedly the line between the two seems increasingly blurred these days ...) I hesitate to comment, but I will anyway. 

Let's take an example - Newton's theory of gravitation.  If no one before him had described/defined the problem - if Copernicus, Brahe, Kepler, Galileo etc had not done their observations and described (more or less) how the planets moved - what could Newton have theorized about?   It's essentially the same situation here. If we can't even say what consciousness is  - if we can't describe/define it - how can we even begin to develop a plausible theory about it?

Once he had developed his theory, Newton and others knew it was a good one because it explained phenomena that had already been described/defined in considerable detail. But how would we know if a theory of consciousness were sound or not if we couldn't even describe what it is supposed to explain? (And things like the famous Nagel 'insight' certainly don't fill that bill).

In short, one could go on and on, ad infinitum, about neurons, synapses, causes, effects, feelings etc (as in recent exchanges I've read on this thread), but unless one can first describe fully and convincingly what one is theorizing about, the whole thing, it seems to me, is futile. (This comment, I should say, also applies to lots of other stuff I've read on the topic. There seems to be a widespread reluctance in this area to try to say fully and clearly what the notion of human consciousness might mean. But it is the skeleton in the cupboard and it's not going to go away...)

DA



2009-06-14
The 'Explanatory Gap'
Reply to Derek Allan
Sorry. The 'DA' at the beginning of my last should have been 'SH'.

2009-06-14
The 'Explanatory Gap'
Thanks for the catchy lyrics, Victor.

... AT




2009-06-14
The 'Explanatory Gap'
Reply to Stevan Harnad
SH: "Because what is needed is an explanation (e.g., gravity) of the datum [an apple falling] , not a "definition."

Gravity per se is a fundamental force that has not been explained, just as consciousness/feeling per se (its sheer existence) has not been explained and probably cannot be explained, as I have stated earlier in this thread.

Stevan, I agree with you that we need an explanation of the datum, but consciousness is not a datum. However, instances of consciousness are data, and these are what need to be explained and, I believe, can be explained. I prefer to call such instances of consciousness "phenomena". The retinoid theory provides a principled biophysical explanation of phenomena (what you call feelings), and elucidates the difference between phenomena and unconscious cognitive processes in the brain. In my theoretical analysis it turns out that all phenomena share the common feature of being a transparent representation of the world from a privileged egocentric perspective (TROWPEP). So I take this observation as the justification for my definition of consciousness (TROWPEP). Let's be clear. My definition of consciousness is not an explanation. The retinoid model is my explanation of consciousness and it motivates my definition.


... AT 




2009-06-14
The 'Explanatory Gap'
Reply to Arnold Trehub

Hi Arnold.  Your theory on consciousness, as far as I can tell, follows today's scientific paradigm of mind.  Your views seem to follow Chalmers' category #1 above (post 2):
(1) There's no explanatory gap, or one that's fairly easily closable. 

Your theory assumes that the phenomenon of consciousness is an emergent phenomenon as you state here:

AT: ... separate brain cells (things) do not individually realize consciousness, rather it is the collective activity of such cells when they are organized into the right kind of brain mechanism (e.g., the retinoid mechanism) that realizes the property of consciousness.

Further, you may be supporting strong AI here:

AT: For example, transistors, capacitors, resistors, and inductors are just "things". But when they are organized into the right kind of mechanism they constitute a radio receiver, something that realizes a property that the separate components cannot realize.

By supporting mental causation, I believe one is also forced to support downward causation.  I’d be curious to see how you might maneuver around this.  Theories of how neurons and transistors function are built on the premise that such things change state because of local causal forces acting on them.  In the case of a transistor, there is a charge applied which forces it to change state.  In the case of a neuron, the interactions at the synapse cause it to fire.  I’m sure that description can be elaborated, but let’s not argue details.  The point is that there can’t be a single neuron, nor a single transistor that is actually affected by anything but the local causal forces acting on it, and those local causal forces are sufficient to explain the behavior of the system.  And if we agree that this is true, then there can’t be any set of neurons or transistors that are affected by anything except the local causal forces acting on each of the neurons or transistors. 

To suggest that the emergent phenomenon of consciousness somehow influences some portion of the brain, or makes the entire brain change state requires that one theorize that these local causal forces are insufficient to describe what that system does.  In other words, the emergent phenomenon of consciousness must intervene and cause a neuron or transistor to do something which is not determined by these local causal forces. 

Certainly, we know there are no such nonlocal causal forces acting on a transistor for example.  None of the transistors in a computational system, regardless of its complexity, changes state except because of these local causal forces.  If nonlocal causal forces can not act on any of the transistors, if downward causation is not allowed, then there is no room for this emergent phenomenon of consciousness to enter the causal chain, either for transistors or neurons.  The cause of neurons firing is fully explained by examining the physical interactions, and these physical interactions are not influenced by emergent, phenomenal properties.

What is more interesting, however, is the claim that mental causation doesn’t enter the causal chain, but it is nevertheless reliably reported.  I would argue that this is also incorrect for exactly the same reason.  If the emergent phenomenon of consciousness can’t change any single transistor, and can’t force a neuron to fire, then those transistors and neurons are not able to report any phenomenal experience either!  They don’t fire because they are reporting an emergent phenomenon such as the experience of pain.  They fire because there are local causal forces acting on them.  There is no room in the physical world (at a classical level) for transistors or neurons to change state because the system they reside within has produced any allegedly emergent phenomenon.  Please note, this only applies to classical interactions and not quantum ones.  Quantum mechanical systems, such as molecules, are holistic and cannot be broken down using boundary conditions as we do with classical mechanical systems.  The paradigm I’m using here only works at a classical level.

How can one claim mental causation doesn’t exist, yet the emergent phenomenon is reporting something about its existence?  The only rebuttal I see is that these two things are purely coincidental.  The report of pain accidentally occurs at the same time the emergent phenomenon is occurring.  The report of pain isn’t caused by the emergent phenomenon; it is reported because of local causal actions acting on neurons.  The experience could be anything.  It could be an orgasm or it could be nothing (p-zombie), but the report of pain and the simultaneous behavior could not change.  Imagine for one moment that, instead of pain, we felt an orgasm when sticking our hand in a pot of boiling water.  The neurons change state not because we are in pain or having an orgasm; they change state because of local interactions, and these local interactions are utterly oblivious to, and independent from, any phenomenal experience that might emerge.  It is the local, causal influences of neurons that report the pain and produce the aversion to it, not the allegedly emergent phenomenal experience reporting anything about itself.  The overall physical state cannot intervene at the local level.

The claim that an experience is being reliably reported must address the same problem that mental causation must address – how is the emergent phenomenon able to intervene at the local, physical level?  The emergent property can’t influence anything physical as long as there are local, causal actions on which all physical interactions can be pinned.  That said, I see no way for emergent mental causation to intervene in any classical mechanical system, so computationalism does not allow experience to be reliably reported.  What I would be interested in is how your theory (or any computationalist theory for that matter) can allow for mental causation, and also how any such theory can allow for experience to be reliably reported (i.e., the report of an experience is a true report that is dependent on the overall emergent phenomenon).
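An illustrative sketch may make the 'local causes only' premise concrete (this toy is purely hypothetical and is not drawn from the post above, from the retinoid model, or from any particular neural theory): in the small ring network below, each unit's next state is fixed entirely by its two neighbours, while a system-wide summary - the would-be 'emergent' property - is merely computed and printed for the observer and never enters the update rule, which is exactly the situation the argument attributes to transistors and neurons.

import random

N = 20
# Random +1/-1 couplings from each unit to its left and right neighbour.
weights = [[random.choice([-1, 1]) for _ in range(2)] for _ in range(N)]
state = [random.choice([0, 1]) for _ in range(N)]

def step(state):
    """Each unit fires iff its locally weighted input is positive."""
    nxt = []
    for i in range(N):
        local_input = (weights[i][0] * state[(i - 1) % N] +
                       weights[i][1] * state[(i + 1) % N])
        nxt.append(1 if local_input > 0 else 0)
    return nxt

def global_activity(state):
    """A system-level summary; note that step() never consults it."""
    return sum(state) / N

for t in range(10):
    print(t, global_activity(state))  # reported, but causally idle here
    state = step(state)

Whether brains are relevantly like this toy is, of course, exactly what is in dispute.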


2009-06-14
The 'Explanatory Gap'
Reply to Arnold Trehub
AT: "Gravity... is a fundamental force that has not been explained, just as... feeling... has not been explained"
But feeling is not a fundamental force; if it were, that would amount to telekinesis, and telekinesis is contradicted by all the evidence. Hence we are entitled to expect an explanation -- and obligated to admit we haven't one: The explanatory gap.


-- SH

2009-06-15
The 'Explanatory Gap'
Reply to Stevan Harnad
AT: "Gravity per se is a fundamental force that has not been explained, just as consciousness/feeling per se (its sheer existence) has not been explained and probably cannot be explained, as I have stated earlier in this thread."

SH: "AT: 'Gravity... is a fundamental force that has not been explained, just as... feeling... has not been explained' "

SH: "But feeling is not a fundamental force ... Hence we are entitled to expect an explanation -- and obligated to admit we haven't one: The explanatory gap.


What I said, as you can see above, is that the sheer existence of consciousness/feeling, like gravity, has not been explained and probably cannot be explained. Why do you insist that I admit what I have admitted from the very beginning of this thread? 

Having said this, my claim is that what we might call the content of consciousness, namely phenomena, can be explained, and I have proposed an explanatory theory (the retinoid model).

Question: Are we justified in saying that consciousness exists without content? In other words, if a person has no phenomenal content, can we say that that person is conscious? Or is "consciousness" just a word that points to any and all instances of phenomenal content?


... AT   






2009-06-15
The 'Explanatory Gap'
Reply to Derek Allan

Derek,

DA:  “if, as you say, you 'observe' thoughts, ideas and feelings then, whatever form of observation you are using, you must presumably be able to answer my questions: Are they big, little, red, green, purple, triangular, square? Do they move around? If so with what kind of motion? Circular? Straight line?  In other words, if you can observe them, they must presumably be physical objects. I am just curious about their physical attributes.”

Consider, by way of comparison, the physical attributes of Microsoft Word.  Most people only observe Word when they are using it.  If we were to describe our observations of MS Word (something which, I assume, we can all agree exists and has physical properties), we might be most comfortable talking about particular instances in which MS Word is used.  Yet, to observe the use of a computer program is not the only way to observe it, nor does it provide a particularly perspicuous view of the program. 

Programmers and engineers observe Word in other ways:  by looking at the programming code, which can be done in a number of ways; or by looking at the physical properties of computers or storage devices through which the program is stored and run.

Such it is with thoughts, ideas, and feelings.  We usually observe them by using them.  You may observe the idea of democracy by watching a presidential election unfold, or by reading a book about democracy, or by arguing about the topic with colleagues.  Compare the observations you make in these cases to the observations which tell you that Microsoft Word exists.  How might an analysis of the properties which make elections, books, and discussions possible differ from an analysis of the properties of a machine running a computer program?

I am not saying feelings or thoughts are in all respects equivalent to computer programs.  I am only suggesting that there are similarities in the ways we observe and analyze them, and that descriptions of our observations of feelings, thoughts, and ideas are not as simple as your questions suggest.

DA:  “science certainly does not have a monopoly on discovering things.”

I never said it did.  I wouldn’t say science discovers anything.  People discover; and when they formalize methods and procedures for discovery, we (or at least some of us) call it science.

DA:  “When you realize that someone you know is not the pain in the neck you thought he was, is that not a 'discovery'?  Everyday life is full of discoveries; indeed life would be unimaginable if that weren't so.”

Yes, but you seem only intent on convincing me that discoveries aren’t always scientific.  Again, that was never in dispute.  You must have forgotten that I have already acknowledged that discoveries can be accidental.

JS" Again, my question: What methods and procedures are you talking about?"

DA:  “Since Bacon, tomes have been written on the methods and procedures of science and the debate still rages - indeed it is perhaps stormier now than it ever was. I am not going to venture into that territory here.”

So you agree that there is no clear set of methods or procedures that define science.  In that case, why do you claim that science has a particular set of methods and procedures?

DA:  “the assumption behind your question seems to be that there is no other means of valid knowledge except science. This has a name in philosophy - it's called ‘scientism’...”

My question was not motivated by any such assumption.  It was a justifiable response to your claim that science began at that specific point in history when its methods and procedures were accepted as knowledge.  You referred to specific methods and procedures.  I have every right to ask what you meant without being accused of making unnecessary assumptions.

In any case, you now seem to be taking a different view, claiming that there is no agreed upon set of procedures or methods which define science.  So I will gladly retract my question; however, this means that your argument against my view no longer stands.

Incidentally, I would agree with a revised formulation of your earlier claim.  You said science began when its methods and procedures were accepted as knowledge.  I would instead say that specific sciences begin when their methods and procedures for discovery are adopted.  This reformulation differs from your earlier claim in two important respects:  First, it does not assume that there is a single set of methods or procedures which define science as a whole, or which predetermine what science can or cannot discover; second, by replacing “accepted” with “adopted,” I am viewing science as an activity in which methods and procedures are used, and not passively accepted as true.

Of course, none of this touches on what constitutes discovery as such.  To address that question, we must turn our attention more directly towards the issue of behavior.

2009-06-15
The 'Explanatory Gap'
Reply to Arnold Trehub

HOW AND WHY DO APPLES FALL -- AND PEOPLE FEEL?


AT: "Gravity per se is a fundamental force that has not been explained, just as consciousness/feeling per se (its sheer existence) has not been explained and probably cannot be explained, as I have stated earlier in this thread."
SH:  -- "AT: 'Gravity... is a fundamental force that has not been explained, just as... feeling... has not been explained' "
SH: "But feeling is not a fundamental force ... Hence we are entitled to expect an explanation -- and obligated to admit we haven't one: The explanatory gap." 
AT: "What I said, as you can see above, is that the sheer existence of consciousness/feeling, like gravity, has not been explained and probably cannot be explained. Why do you insist that I admit what I have admitted from the very beginning of this thread?" 
My elisions were intentional (replacing "consciousness/feeling" with "feeling" and leaving out the rest). 

My point was that we are not entitled to say that "How and why do people feel?" is inexplicable in the same sense that "How and why is there gravity?" is inexplicable.

Feeling, unlike gravity, is not a fundamental force, a primitive explanatory "given" that can then be used in explaining other things caused by it. Feeling is more like falling. The answer to "How and why do apples fall?" is "because of gravity (etc.)," but the answer to "How and why do people feel?" is... an explanatory gap.
AT: Having said this, my claim is that what we might call the content of consciousness, namely phenomena, can be explained, and I have proposed an explanatory theory (the retinoid model).
Again, the deconstruction is instructive: "...my claim is that what we [feel], namely [feelings], can be explained, and I have proposed an explanatory theory..."

Your theory explains the functional correlates of feeling. We already know what we feel. What we had wanted to know was how and why...
AT: Question: Are we justified in saying that consciousness exists without content? In other words, if a person has no phenomenal content, can we say that that person is conscious? Or is "consciousness" just a word that points to any and all instances of phenomenal content?
Deconstruction: "...if a person has no [felt feeling], can we say that that person [feels]?..." 

Please see earlier threads on "uncomplemented categories" and what it feels like to feel nothing.


-- SH





2009-06-16
The 'Explanatory Gap'
Hi Jason

I must say I don't really see the comparison with a computer program as very helpful.  I don't doubt that a program converts one form of physical input to other physical forms and then to a form of physical output. But how does that help us if we have no assurance that thoughts, feelings, etc are purely physical?

You also say: "People discover; and when they formalize methods and procedures for discovery, we (or at least some of us) call it science."

I think you mean that "when [people] formalize methods and procedures for scientific discovery, we (or at least some of us) call it science."
 
You say: "Yes, but you seem only intent on convincing me that discoveries aren’t always scientific.  Again, that was never in dispute.  You must have forgotten that I have already acknowledged that discoveries can be accidental."

No I haven't forgotten.  But the contrast I am making is not between discoveries via science and discoveries by accident. It is between discoveries via science and discoveries that have nothing to do with scientific method (eg everyday discoveries - such as realizing that X is not the pain in the neck one thought he was, or that a coffee shop one likes is run by the mafia etc, etc).

You say: "So you agree that there is no clear set of methods or procedures that define science.  In that case, why do you claim that science has a particular set of methods and procedures?"

But there is no contradiction here. Some people debate (e.g.) what philosophy is; that doesn't mean that philosophy doesn't have its own methods and procedures. It just means there is debate about what they are precisely. 

You say in the same vein: "In any case, you now seem to be taking a different view, claiming that there is no agreed upon set of procedures or methods which define science."

That wasn't my point. As indicated, I was simply saying that there is debate about what those procedures are exactly (which is not really surprising, after all.)

I am not really sure what your final point is, so I will let that go for now.

DA

2009-06-17
The 'Explanatory Gap'
JM: "Can you define "transparent" in physical terms and "egocentric perspective" in geometrical and physical terms so that these concepts can be used in a scientific hypothesis for investigating the brain?"

Transparent: The biophysical activation pattern that represents the volumetric world and its occurrent events within retinoid space is composed of autaptic neurons that are spatiotopic analogs of the salient features of the real world. According to the theory, the phenomenal content of this biological plenum is not of its constituent neurons within the brain, but of the real world out there. So it's as if we have a direct phenomenal experience of the world through a transparent medium; i.e., the biophysical excitation patterns within the retinoid mechanism, which are not experienced as representations, but as the real thing.

Egocentric perspective: A phenomenal representation of the geometric properties of the world within a virtual coordinate system having the self-locus as its "point" of origin (the (0,0,0) coordinate). If we think of the retinoid system as a space-time plenum, then the self-locus would be the (0,0,0,0) coordinate.
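As a minimal sketch of the coordinate idea only - not of the retinoid mechanism itself, and with names that are purely illustrative - a point described in a world frame can be re-expressed relative to a self-locus placed at the origin (0,0,0); the observer's orientation is ignored here.

from dataclasses import dataclass

@dataclass
class Point:
    x: float
    y: float
    z: float

def to_egocentric(world_point: Point, self_locus: Point) -> Point:
    """Re-express a world-frame point relative to the self-locus at (0,0,0)."""
    return Point(world_point.x - self_locus.x,
                 world_point.y - self_locus.y,
                 world_point.z - self_locus.z)

# Example: a cup one metre in front of, and half a metre above, the self-locus.
cup = Point(3.0, 2.0, 0.5)
me = Point(3.0, 1.0, 0.0)
print(to_egocentric(cup, me))  # Point(x=0.0, y=1.0, z=0.5)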


JM: "What do we see when we see 'black', for instance in a dark room with a blindfold?"

We have the phenomenal experience that we call "black".


JM: "How does the image that has perspective provide an almost immediate sense of depth and how does it occur simultaneously rather than as a succession of points?"

A 2-dimensional image that has 3-D perspective does not always provide an almost immediate sense of depth, and the sense of depth may develop asynchronously over a complex 2-D image. The details of how this all happens involve the specific structure and dynamics of the retinoid model and, in particular, excursions of the heuristic self-locus over Z-planes in retinoid space. 

Random-dot stereograms (introduced by Julesz) are an interesting example of 2-D images providing a striking sense of depth with no perspective in the source images. The fact that the retinoid model resolves random-dot stereograms into a proper 3-D image is a strong piece of evidence in support of the validity of the model (see The Cognitive Brain, pp. 73-75).
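For readers unfamiliar with Julesz's stimuli, the following sketch shows the standard construction of such a pair (it is not taken from The Cognitive Brain and says nothing about how the retinoid model resolves the images): neither image alone contains any depth cue, yet a central square appears in depth when the pair is fused, because a patch of dots in one image is shifted horizontally - a binocular disparity - relative to the other.

import random

def random_dot_stereogram(size=100, square=40, disparity=4):
    """Return a left/right pair of random-dot images with a disparate central square."""
    left = [[random.randint(0, 1) for _ in range(size)] for _ in range(size)]
    right = [row[:] for row in left]                 # start as an exact copy
    lo, hi = (size - square) // 2, (size + square) // 2
    for y in range(lo, hi):
        for x in range(lo, hi):
            right[y][x - disparity] = left[y][x]     # shift the central patch
        for x in range(hi - disparity, hi):          # refill the exposed strip
            right[y][x] = random.randint(0, 1)
    return left, right

left_img, right_img = random_dot_stereogram()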

... AT



 




2009-06-17
The 'Explanatory Gap'
Reply to Stevan Harnad
This debate is full of assumptions about physics and physicalism that are not valid. For instance, an 'inexplicable' gravity is being incorrectly used as an example of inexplicability:

SH: "My point was that we are not entitled to say that "How and why do people feel?" is inexplicable in the same sense that "How and why is there gravity?" is inexplicable."

Gravity has mysteries, as does everything, but, as I will argue below, it is actually our unwillingness to consider how force and gravity are understood in physics - our writing physics off as 'inexplicable' - that makes the 'explanatory gap' into a chasm. I will give a brief description of gravity to show how far it is from inexplicable because, as I will argue later, this is salient to the description of 'feelings'. How gravity works is known at a fairly deep level - whether a physicist hypothesises about quantum gravity or classical gravity, he will agree that gravity is due to the existence of four-dimensional spacetime. In spacetime the distance an object travels in time is measured with a clock, and the rate at which clocks tick varies with velocity. A succession of different velocities results in a succession of different rates at which clocks tick, so 'acceleration' is a curving path in spacetime. If the rate at which clocks tick varies over a given spatial distance there is a curve in spacetime geometry, known as 'acceleration' and manifest as a 'force'. If the curving spacetime is due to a mass, the resulting force is called the 'force of gravity'.
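To put the clock-rate point in symbols (a standard weak-field gloss added for clarity, not something taken from the post itself, with Φ the Newtonian potential, τ the proper time of a clock at rest and t the time of a distant reference clock):

\[
\frac{d\tau}{dt} \;\approx\; 1 + \frac{\Phi(\mathbf{x})}{c^{2}},
\qquad
\mathbf{g} \;=\; -\nabla\Phi \;\approx\; -\,c^{2}\,\nabla\!\left(\frac{d\tau}{dt}\right),
\]

so, in this approximation, a spatial variation in the rate at which clocks tick just is a gravitational acceleration.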

So, how does this description of gravity apply to the description of 'feelings'?  Well, it shows that if we want a physical description of anything we must consider both space and time. If we want a physical description of 'feelings' we must describe where and when these feelings occur or seem to occur.  Do feelings occur for no time at all?  Do they occur in your head or everywhere?  Are they spatially or temporally extended?  These are simple questions.  My answers would be based on examples of feelings. 

Take the feeling evoked by a brief scratch on my arm. This occurs at the apparent location of my arm in my experience and lasts about 2 seconds. The feeling is arranged in time at the apparent location of the scratching action in that, although it seems to be 'present', "the practically cognized present is no knife-edge, but a saddle-back, with a certain breadth of its own" (William James, 1890). Take the feeling evoked by a spoken word: it occurs at the lips of the speaker, and the whole word becomes present at that position in my experience. Take my feeling of 'space': space to the left and right and up and down is the existence of many simultaneous things, such as areas of colour or sets of letters on a page. For me, space equals simultaneity within my experience.

Notice how feelings differ from functions: in neuroscience and philosophy, functions are a succession of objects as 3D components of spacetime. When we abstractly think of functions we use our time-extended feelings to knit together these instantaneous, 3D components and are in danger of fooling ourselves into thinking that this new hybrid of feelings and functions is the same as physical functions - it isn't; physical functions are successions of instantaneous forms, each having gone before the next appears, whereas experience is a time-extended entity.

2009-06-17
The 'Explanatory Gap'
Reply to David Chalk
DC: "If nonlocal causal forces can not act on any of the transistors, if downward causation is not allowed, then there is no room for this emergent phenomenon of consciousness to enter the causal chain, either for transistors or neurons."

Do you know of any form of emergentism that does not involve the addition of directions for arranging things?  A letter can emerge from a spray of ink drops because the drops have been placed on a 2D surface; magnetism can emerge from a flow of electric charges because of the rotation of the spatiotemporal axes between reference frames (i.e., as a result of arrangements in time); etc.  Can we describe the effect of a letter 'A' on a printed sheet purely in terms of the droplets?  The letter 'A' may cause an envelope to be directed to a particular part of town by a sorting machine.  Was this a property of an ink drop?  Is magnetism a property of a charge or a property of charge plus four-dimensional spacetime?  If it is the latter, then can we describe magnetic effects faithfully in terms of charges alone, or do we need to invoke a geometrical explanation?  In any of these cases the emergent phenomenon can indeed affect the world in general, and downward causation is observed.

It is clearly the case that we can report on our experiences so if our experience is emergent it would be like other emergent phenomena and capable of interacting with the world.

DC: "To suggest that the emergent phenomenon of consciousness somehow influences some portion of the brain, or makes the entire brain change state requires that one theorize that these local causal forces are insufficient to describe what that system does.  In other words, the emergent phenomenon of consciousness must intervene and cause a neuron or transistor to do something which is not determined by these local causal forces."

So what does consciousness do?  It enables a large amount of brain activity that does not occur when we are 'out-cold'.  It is obviously required for this function, but why and how it does it is a bit of a mystery.  Most intriguingly, it provides events simultaneously; for instance, we see slabs of colour and see and feel many things at once, as if viewed from a 'mind's eye'.  Dennett mocked this Cartesian Theatre, but in his mockery he knew that every reader was looking at his words in that theatre.  Perhaps, instead of mockery, we could combine the hunch about emergentism, the way that emergentism requires extra directions for arranging things, and the Cartesian Theatre of our experience into an empirical hypothesis about conscious experience.  As an example, if we add another spatial dimension for arranging things, the impossible 3D point of the mind's eye becomes a line and is no longer an impossible nexus: emergentism is usually geometrical, so is there an undiscovered, multidimensional geometry that hosts conscious experience?

2009-06-17
The 'Explanatory Gap'
Reply to Derek Allan
Hi Derek,

The point about software such as MS Word was to help you understand how I think we observe mental contents (thoughts, ideas, etc.).  I asked you to compare the observations you make which tell you that the idea of democracy exists (observations you make during elections, discussions, while reading a book, etc.) with observations you make which tell you that MS Word exists (which you make while using the program, or purchasing the program, copying the program from one medium to another, and so on.)  I think that, if you carry out this experiment, you might find some interesting results.

You ask, "how does that help us if we have no ... (expand) assurance that thoughts, feelings, etc are purely physical?"

First of all, can you tell me what it means to say that something is or isn't "purely physical?"  And can you tell me what it means to have "assurance" that something is or isn't purely physical?

What assurance do you have that MS Word is purely (or even impurely) physical?

I do not think we should impose an arbitrary notion of "physical" on scientists.  To put it another way, I think the term "physical" simply means "amenable to scientific discovery."  But "scientific discovery" just means "formalized discovery."  So, to have assurance that something is physical is to have a formal method of discovering it.  The method is the assurance.  And if we have no formal method for discovering something, we cannot clearly state what that something is, so we have no justification for the claim that it won't ever be open to scientific discovery.

Do we have formal methods of discovering mental contents?  I think we do, to a limited extent.  Consider our methods for discovering the validity of mathematical theorems.  Some philosophers of science have classified mathematics as being something of a science, and this is why.  We lack strong methods for broader discoveries about mental contents; for example, we are only beginning to develop methods for discovering how mental contents are produced by the brain.  But these methods are developing as we speak.

You might say that this fact (our lack of broad, strong methods for discovering mental contents) implies that thoughts and ideas might not be physical.  But what would it mean to say that?  To say that something is not physical is only to say that it is not amenable to scientific discovery.  That does not tell us anything about what it is, but only that it is something that cannot be formally discovered.  Yet, we cannot formally discover that something is not amenable to formal discovery.  If it is true that mental contents are not physical, this fact could only be discovered accidentally, and so it could not be known with any assurance.  Thus, claiming that mental contents are not physical is not only without reason; it is unjustifiable.  It might be true that we will never develop a formal method for discovering how the brain produces mental contents, but that has no philosophical implications.

DA:  "the contrast I am making is not between discoveries via science and discoveries by accident. It is between discoveries via science and discoveries that have nothing to do with scientific method (eg everyday discoveries - such as realizing that X is not the pain in the neck one thought he was, or that a coffee shop one likes is run by the mafia etc, etc)."

What makes those discoveries unscientific?  My point is that, if such discoveries are made according to formal methods and procedures for discovery, then they are scientific.  It doesn't matter if you are discovering a planet, a species of bug, an amiable fellow, or a front for the mafia.

You misunderstand me here, when you say, "I think you mean that 'when [people] formalize methods and procedures for scientific discovery, we (or at least some of us) call it science.'" 

Scientific discovery is discovery according to formal methods and procedures.  At least, this is how scientists seem to use the term, and this view makes the most sense to me considering the history of the philosophy of science.

About that history, there is a contradiction in your argument.  You say, on the one hand, that science began at a precise moment in history when its methods and procedures were accepted as valid means of knowledge.  On the other hand, you say that the history of the philosophy of science has been marked by strong disagreement as to what those methods and procedures are.

One has to wonder just what was accepted as a valid means of knowledge, if the identity of the methods and procedures of science has always been in dispute.  If there was never a clear, broad agreement about the identity of the methods and procedures of science, then there was no historical moment in which any such methods were accepted as a valid means of knowledge.

I would suggest a different view of the history of science.  It is not a history of a specific set of methods and procedures for discovery, but a history of the many struggles to formalize discovery, starting perhaps with the development of methods for creating fire and tools. 

Perhaps much of the disagreement in the philosophy of science (over the ages) has stemmed from a faulty assumption, namely that science is defined by a particular set of methods and procedures.  When we try to think of science in this way, we are following a tradition heavily influenced by theology. 

The philosophy of science, since at least the 13th century, has often had to negotiate science with the church, attempting to differentiate science and theology as two distinct yet equally valid methods of discovery.  But this distinction is not based on any demonstrable fact, and I believe it has only hindered the development of a coherent account of discovery.

We cannot forget that the tension between the church and scientists helped define the birth of modern philosophy.  When Newton wrote his Principia, he was attempting to codify science as something distinct from but not in conflict with religious "discovery."  (I use scare quotes here because I don't think there is any such thing as religious discovery.)  He did not bring science into being by offering a set of methods and procedures.  Science did not begin when Newton's, or anyone else's, principles of scientific discovery were accepted as a valid means of knowledge.  Newton and others, like Francis Bacon, were trying to codify rules for something that existed and functioned without rules.  I think this is largely why the philosophy of science has been problematic, and why scientists should not be bothered by any of it.

The philosophy of science should help us understand and maintain the integrity of discovery, and not attempt to place a priori limitations on its potential.  This is why I am inclined to respond to any arguments for an explanatory gap with extreme skepticism.

2009-06-17
The 'Explanatory Gap'