
2016-10-03
RoboMary in free fall

In footnote 3 of Daniel Dennett's paper "What RoboMary Knows" https://ase.tufts.edu/cogstud/dennett/papers/RoboMaryfinal.htm, Dennett notes:

---

Robinson (1993) also claims that I beg the question by not honouring a distinction he declares to exist between knowing "what one would say and how one would react" and knowing "what it is like."  If there is such a distinction, it has not yet been articulated and defended, by Robinson or anybody else, so far as I know.  If Mary knows everything about what she would say and how she would react, it is far from clear that she wouldn't know what it would be like. 

---

In the paper Dennett imagines RoboMary as follows:

"1.RoboMary is a standard  Mark 19 robot, except that she was brought on line without colour vision; her video cameras are black and white, but everything else in her hardware is equipped for colour vision, which is standard in the Mark 19."

Dennett then, it seems to me, considers that RoboMary would consciously experience red when in a similar situation to us experiencing red etc. At the very least, from his response to Robinson, it is clear that he is claiming that it has not been shown that if you know what it would say and how it would react, you would not know what it was like for it. Dennett considers the following objection to his thought experiment:

"Robots don't have colour experiences!  Robots don't have qualia. This scenario isn't remotely on the same topic as the story of Mary the colour scientist."

And gives the following response:

"I suspect that many will want to endorse this objection, but they really must restrain themselves, on pain of begging the question most blatantly. Contemporary materialism-at least in my version of it-cheerfully endorses the assertion that we are robots of a sort-made of robots made of robots. Thinking in terms of robots is a useful exercise, since it removes the excuse that we don't yet know enough about brains to say just what is going on that might be relevant, permitting a sort of woolly romanticism about the mysterious powers of brains to cloud our judgement. If materialism is true, it should be possible ("in principle!") to build a material thing-call it a robot brain-that does what a brain does, and hence instantiates the same theory of experience that we do. Those who rule out my scenario as irrelevant from the outset are not arguing for the falsity of materialism; they are assuming it, and just illustrating that assumption in their version of the Mary story.  That might be interesting as social anthropology, but is unlikely to shed any light on the science of consciousness."

Here one might straight away claim that there is a distinction between knowing how a robot will behave and knowing whose theory regarding robot consciousness was correct. Two people could know how the robot would behave, but disagree about the correct theory regarding consciousness. You could think the job done at that point; why bother continuing? But one can go further.

Let us imagine that for each camera pixel the Mark 19's eye sockets have three 8-bit channels, A, B and C, which are used for the light-intensity encodings. For the greyscale camera the A, B and C channel values will all be the same. But with the colour cameras the values will depend on the version. With RGB cameras channel A will transmit the encoded red intensity, channel B the encoded green intensity, and channel C the encoded blue intensity, but with BRG cameras channel A will transmit the blue intensity, channel B the red intensity, and channel C the green intensity.

Now consider three Mark 19 robots, each in a different brightly lit room, sitting in a chair with all of its motors disabled, so that it is unable to move any body parts, including its cameras.

The first is in a white room with a red cube which its RGB cameras are looking at. These cameras are slightly unusual as they also wirelessly broadcast their signal.

The second is in a white room with a blue cube which its BRG cameras are looking at. These cameras are also slightly unusual as they also wirelessly broadcast their signal.

The third is in a room with no cube; what is plugged into its camera sockets is a receiver that switches between picking up the signals broadcast from the cameras in the first two rooms.

The processing would be the same in each case, as in each case the channel values for the cube pixels (assuming no shading) would be channel A = 255, channel B = 0, channel C = 0. There seems to me to be no way for Dennett (or any other physicalist philosopher, for that matter) to establish whether the Mark 19 in the third room's experience of a cube was closer to how they (the philosopher) would consciously experience a red cube or closer to how they would consciously experience a blue cube. If any philosopher disagrees, then I for one would be interested in how they thought they could tell. If not, then here is another example of a distinction between knowing how something will behave, and knowing what it would be like (if it was thought to be like anything at all) for a robot.
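
To make the point concrete, here is a minimal sketch in Python (the Mark 19 and its encode function are of course hypothetical, invented purely for illustration) of the channel encodings described above. It shows that a red cube seen through RGB cameras and a blue cube seen through BRG cameras put identical values on channels A, B and C, so the receiver in the third room picks up indistinguishable signals:

```python
# Hypothetical sketch of the Mark 19 channel encoding described above:
# each camera version permutes the same three 8-bit intensities onto
# channels A, B and C.

def encode(camera_version, red, green, blue):
    """Return the (A, B, C) channel values for one pixel."""
    if camera_version == "RGB":
        return (red, green, blue)   # A = red, B = green, C = blue
    if camera_version == "BRG":
        return (blue, red, green)   # A = blue, B = red, C = green
    raise ValueError("unknown camera version: " + camera_version)

# First room: RGB cameras looking at a red cube (full red, no green or blue).
first_room = encode("RGB", red=255, green=0, blue=0)

# Second room: BRG cameras looking at a blue cube (full blue, no red or green).
second_room = encode("BRG", red=0, green=0, blue=255)

# The two broadcasts are identical, so the receiver in the third room
# cannot tell which room it is listening to.
assert first_room == second_room == (255, 0, 0)
```

Whatever processing sits downstream of channels A, B and C receives (255, 0, 0) in every case, which is the sense in which the processing in the three rooms is the same.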

"Knock-down refutations are rare in philosophy, and unambiguous self-refutations are even rarer, for obvious reasons, but sometimes we get lucky. Sometimes philosophers clutch an insupportable hypothesis to their bosoms and run headlong over the cliff edge. Then, like cartoon characters, they hang there in mid-air, until they notice what they have done and gravity takes over."

-Daniel Dennett 


2016-10-05
RoboMary in free fall
Reply to Glenn Spigel

RE: “Dennett then, it seems to me, considers that RoboMary would consciously experience red when in a similar situation to us experiencing red etc.”

How bizarre it is that when trying to fathom the mystery of human consciousness, so many philosophers rush straight to the idea of robots, “zombies” etc. The underlying thought seems to be: “Well, let’s find something whose outward behaviour vaguely resembles that of a human being and then try to work out if that behaviour indicates that it’s conscious in the way a human being is.” It never seems to cross the minds of those who start off down this track that one could never say anything meaningful about this unless, first of all, one could say what human consciousness is. (And of course if one could say that, one wouldn’t need to think about robots, “zombies”, etc. anyway because one would already know what one wanted to know.)

Analytic philosophers seem to like thinking in little equations, so let me phrase it in that form.

Let human consciousness = U (standing for unknown)

Let the kinds of reaction characteristic of a robot (or “zombie”) = R, where R is a known. (That is, let’s assume for the sake of argument that we can say pretty accurately what R is.)

So the question is: Is R the same as U?

Ridiculous question, is it not?  Totally unanswerable. How could we ever know, given that U is unknown?

So as I say, the whole line of enquiry is pointless unless one already knows U. And if one does, one has what one wants anyway.

DA


2016-10-06
RoboMary in free fall
Reply to Derek Allan
We know what consciousness is in the sense of what feature we are referring to. So that is not unknown, and so could not be the U in your equation. What is in dispute is the nature of the feature, which I assume is your U. 

Zombies (which, in my opinion, would need to be imagined to be in a universe which has a physically different nature from the one physicalists imagine us to exist within, in order to be compatible with physicalism) are just used to illustrate that certain hypotheses regarding the nature of U imply that a physical following the same laws of physics, in the same way as the physical in this universe has been imagined to, would result in conversations about consciousness, even though nothing was consciously experiencing.

Dennett uses robots because, as I quoted him stating in the original post, "contemporary materialism-at least in my version of it-cheerfully endorses the assertion that we are robots of a sort-made of robots made of robots." So he is putting forward a hypothesis regarding U.

The utility of examining the hypotheses for U is that you might be able to rule some out. For example, here http://philpapers.org/post/21350 I give an argument showing that if you consider your conscious experience to be evidence that reality is not a physical zombie universe, then it is illogical to also believe what I outlined as the mainstream scientific interpretation of reality. It seems to me that that is quite a useful thing to realise, as you might not have realised that believing your conscious experience is evidence that reality is not a physical zombie universe and believing in the mainstream scientific interpretation of reality involved a contradiction.


2016-10-06
RoboMary in free fall
Reply to Glenn Spigel

RE: “We know what consciousness is in the sense of what feature we are referring to.”

What “feature” is that? How would you describe it? I assume you must be able to: it would be very odd to talk about a feature that you couldn’t describe. Especially since you then want to compare it to something else (the behaviour of robots, zombies*, etc.).

DA

* The very notion of a “zombie” is absurd to my mind. It’s defined (by Chalmers et al) as a human minus consciousness. Given that we’re unable to specify what consciousness is, how could we ever possibly do that little subtraction? Many philosophers think they “know what a zombie is” because they’ve seen them in movies. (Remember? They’re the characters with strangely glazed looks, a slightly stiff gait, and a determination to destroy the world, including of course the hero and the heroine. Hollywood’s treasured contribution to philosophy… )


2016-10-07
RoboMary in free fall
Reply to Derek Allan
If you can understand how some atheists might expect death to be: a cessation of any conscious experience, then it not being like that for you means that you are consciously experiencing. That might seem circular, but it just requires you to comprehend what many atheists expect death to be like (what no after-life would be like). Consciously experiencing is the only evidence for reality that you have. If you have seen the film the Matrix, then there are some scenes which are supposed to indicate what the characters were consciously experiencing while plugged into a machine. It is the basis for all reasoning when using a phenomenological approach.

The feature is not a feature which is in principle observable directly or indirectly from a third person perspective. At no point do I compare that feature to behaviours, I do not know how you managed to draw that conclusion.

The point of a zombie is that it behaves the same as a human but lacks the feature of consciously experiencing. I tend to conceive of them in a different physical universe from how this one is imagined to be. The imagined physical in that alternate universe is of a different physical nature to the imagined physical in this universe, but follows the same laws of physics. A difference in the physical nature in the alternative universe means that while the behaviour of the forms within it is imagined to be identical to the behaviour of the counterpart forms in this universe, none of the forms in the zombie universe have the feature of consciously experiencing. The human-like forms are zombies because what it is like to be one of them is what many atheists expect death to be like (the cessation of any conscious experience that accompanies a lack of after-life). One reason to consider the imaginary zombie universe is to highlight that consciously experiencing does not reduce to behaviour; another is that it allows you to consider, once you understand what a zombie universe is supposed to be like, whether you can state as a fact that reality is different from a zombie universe.

Are you suggesting that you still cannot guess what feature I am referring to as consciously experiencing, or that you cannot imagine what I mean by a zombie universe? Or are you now able to understand the original post, and the inability of the physicalist philosophers to tell whether the robot in the third room (if it was considered to be consciously experiencing) would be having a conscious experience closer to their conscious experience of red, or closer to their conscious experience of blue?


2016-10-07
RoboMary in free fall
Reply to Glenn Spigel

RE: “If you can understand how some atheists might expect death to be: a cessation of any conscious experience…”

Neither you, nor I, nor any atheist knows what death is – apart from the physical indicators. That “undiscovered country from whose bourn no traveller returns”, as Hamlet wisely said. So it would obviously be absurd to base any philosophical argument on what death may or may not be “like”.

RE: “If you have seen the film the Matrix,…”

Why, oh why, do so many philosophers – even quite well-known ones – seem to think that Hollywood is an authority on matters of importance to philosophy? It’s positively juvenile! Hollywood is an industry, and screenwriters have one aim and one aim only – to write a movie that makes money. Do you seriously think they give two hoots about making a contribution to philosophy!!?? (Or could do, if they tried?)  

RE: “Are you suggesting that you still cannot guess what feature I am referring to as consciously experiencing, or that you cannot imagine what I mean by a zombie universe?”

Yes. More than suggesting. Stating, affirming. And I notice you have not told me what the “feature” you keep referring to is. If there is a “feature”, what is it? Describe it.

As for zombies, as I say, the very notion is philosophical nonsense – unless one could define consciousness (the element to be subtracted) in advance. And if you could do that, why bother about so-called “zombies”?

I have encountered so many attempts like yours on these threads to argue that we somehow just “know” what consciousness is. But then, when I ask what it is we know, the whole thing always goes up in smoke…

DA


2016-10-09
RoboMary in free fall
Reply to Derek Allan
I have tried to reference the feature, and have asked you questions which might enable me to guide you to it, but on the face of it you seemed unable to understand the questions. Here are a couple of examples:

Example 1: As a response to being asked whether you understood that some atheists thought there was no afterlife, and whether you could understand what those atheists imagined death would be like, you replied that no one knows what death would be like. But the question did not suggest anyone knew what death would be like, and there was no philosophical point being made which relied on anyone knowing. So your response was inappropriate. The question only required you to understand what those atheists believed death would be like, and that belief is one the vast majority of humans understand. Are you claiming that you do not understand how they imagine it to be?

Example 2: Regarding the film the Matrix, you were asked whether you had seen the film, and understood that some scenes depicted what it would be like for the characters plugged into machines. Your response made it seem that you thought the question was about whether the film was trying to make any philosophical points. It was not. It was a simple question. Have you seen the film, and did you understand that some scenes depicted what it would be like for the characters plugged into machines? It was a key part of the plot, and I have yet to meet one person that did not understand what those scenes were supposed to depict.

Your replies could be taken to reflect not a lack of comprehension, but an attempt to avoid admitting that you have no response to certain philosophical arguments by claiming not to understand them. If you could not follow parts of a discussion that an average 10 year old could, then that would not be a refutation of the argument; it would just reflect *your* inability to follow it. To attribute such a motive to you would be to paint you as a pitiful character, and so I will assume that there was no such motive behind your response. And based on that assumption, and since you are not an average 10 year old, but someone who has presumably done at least an undergraduate course in philosophy, I will try some other attempts at explaining what feature of reality is being referred to in the discussion. I assume you have come across the philosophical works of Berkeley, and if you did, did you manage to understand what he was suggesting reality was, or were you about the only one in the class that could not?

Do you understand what a first person perspective is and do you think the majority of people believe a cup has one?

I understand you may have previously just responded quickly without giving too much thought to it, and misunderstood the questions, but this time perhaps take some more care. There are four questions in this reply; perhaps you could make sure you feel that you understand them before giving a response, and ask for clarification if you feel that I have not been clear. It is not that I mind trying to help you understand, but I do not want to be wasting my time if you were motivated in the fashion that I hope you are not. Though presumably, if you were, you would just look for a quick exit from the conversation before you started to look ridiculous.

As for what you stated about zombies, as I have already explained there is a distinction between understanding what feature is being referred to, and knowing what gives rise to that feature. The point of the zombies is to examine some theories about what gives rise to the feature.  



2016-10-09
RoboMary in free fall
Reply to Glenn Spigel

RE: “The question only required you to understand what those atheists believed death would be like, and that belief is one the vast majority of humans understand. Are you claiming that you do not understand how they imagine it to be?”

I have no idea what atheists imagine death to be “like”. Why would we assume they all think alike anyway? More importantly, what possible difference can it make?  No one knows what death is “like”, so everything that anyone – including atheists – thinks, believes, imagines, supposes, guesses, assumes, whatever, is and can only ever be, 100% pure conjecture. I really don’t see 100% pure conjecture as a useful basis for a philosophical argument, do you?  (I’m putting “like” in scare quotes, by the way, because we have no way of knowing if it even makes sense to use the word in this context. It assumes there is something we can compare death to. Is there? Who knows? You don’t know, any more than I do.)

RE: “Have you seen the film, and did you understand that some scenes depicted what it would be like for the characters plugged into machines?”

No to both questions. I stopped basing my thinking around Hollywood comic book ideas when I was about 10. I am not religious but 1 Corinthians 13:11 has a pertinent comment here.

RE: “Do you understand what a first person perspective is and do you think the majority of people believe a cup has one?”

I have no idea what the majority of people think about the “perspective” of a cup. I hope they don’t waste too much time on the subject though. I don’t.

But you still have not answered my question. You said: “We know what consciousness is in the sense of what feature we are referring to.” I asked you to specify what “feature” you had in mind, but all I get in reply are questions about what atheists might imagine about death, references to a Hollywood pulp fantasy movie I have not seen and hopefully will never see, and a question about the “perspective” of a cup. You do seem to be avoiding answering my question, if you don’t mind me saying so.

DA


2016-10-09
RoboMary in free fall
Reply to Derek Allan
I am not avoiding answering your question; I am trying to guide you to the feature I am discussing. For example, I asked you whether you could imagine what atheists that did not believe in an afterlife thought death would be like. But you stated that you could not. Had you been able to, then I could have explained that the feature of it not being like that for you was what I was referring to.
I asked you whether you understood what Berkeley was suggesting reality was, or whether you were the only one in the class that could not, but you did not answer. Could you please let me know your answer?

If you want me to have a direct go at explaining, I will. Every experience you are aware of is a conscious experience. So if you are aware of anything, you are consciously experiencing. That does not imply that you are aware of everything you consciously experience. With reference to Berkeley, his position is that there is no physical and that you are a mind. The reason I brought him up is that it avoids any confusion with regards to what is meant by what you experience. You could otherwise have thought that subconscious brain activity of the human form you experience having counted as the human experiencing things. So it would be useful if you answer that Berkeley question.


2016-10-09
RoboMary in free fall
Reply to Glenn Spigel

RE: “Had you been able to, then I could have explained that the feature of it not being like that for you was what I was referring to.”

Just to be clear, this was not what I said (if that’s what you’re implying). I said I had no idea what an atheist would think about death and I guessed that in any case they might not all think the same. I also pointed out that since we know zero about death (i.e. what “follows” it), to suggest it is “like” anything seems to be a misuse of the word “like”.

RE: “I asked you whether you understood what Berkeley was suggesting reality was, or whether you were the only one in the class that could not, but you did not answer.”

My apologies. I overlooked this one. I was no doubt distracted by your suggestion that something sensible could be said about human consciousness on the basis of Hollywood juvenilia like the Matrix.

RE: “With reference to Berkeley, his position is that there is no physical and that you are a mind.”

It’s a long time since I read any Berkeley but is this really what the esse is percipi argument means? Not as I recall. I don’t think he denies the existence of material objects, if that’s what you mean by “there is no physical”. But in any case, I don’t see the relevance of this to our topic. (I had no intention of bringing up questions of the subconscious, by the way – not that I think Berkeley would be much use to you if I did.)  

RE: “Every experience you are aware of is a conscious experience”.

This just looks like tautology. Unless you are using the word in some special sense (and if so what?) “aware” means “conscious”. (E.g. “I was aware of the danger”; “I was conscious of the danger”. Same meaning.)  

Then you say: “That does not imply that you are aware of everything you consciously experience.” 

So here you seem to be using the two words in a different sense. Care to explain the difference?

DA


2016-10-10
RoboMary in free fall
Reply to Derek Allan
Berkeley does deny the existence of material objects. Are you claiming that you would not understand what Berkeley might mean by that?

Regarding the point that every experience you are aware of is a conscious experience, where were you thinking there was a tautology? You being aware of the danger (other than a subconscious awareness) means you are conscious of the danger. If you only had a subconscious "awareness" then you were not aware (which could be demonstrated in a scientific experiment).

Are you capable of understanding the following:

Here is a science fiction possibility discussed by philosophers: imagine that a human being (you can imagine this to be yourself) has been subjected to an operation by an evil scientist. The person’s brain (your brain) has been removed from the body and placed in a vat of nutrients which keeps the brain alive. The nerve endings have been connected to a super-scientific computer which causes the person whose brain it is to have the illusion that everything is perfectly normal. There seem to be people, objects, the sky, etc; but really all the person (you) is experiencing is the result of electronic impulses travelling from the computer to the nerve endings. The computer is so clever that if the person tries to raise his hand, the feedback from the computer will cause him to ‘see’ and ‘feel’ the hand being raised. Moreover, by varying the program, the evil scientist can cause the victim to ‘experience’ (or hallucinate) any situation or environment the evil scientist wishes. He can also obliterate the memory of the brain operation, so that the victim will seem to himself to have always been in this environment. It can even seem to the victim that he is sitting and reading these very words about the amusing but quite absurd supposition that there is an evil scientist who removes people’s brains from their bodies and places them in a vat of nutrients which keep the brains alive. The nerve endings are supposed to be connected to a super-scientific computer which causes the person whose brain it is to have the illusion that . . . (?)



2016-10-10
RoboMary in free fall
Reply to Glenn Spigel

Re: Berkeley does deny the existence of material objects.

Not as I recall. But this is a side issue.

Re: Regarding the point that every experience you are aware of is a conscious experience, where were you thinking there was a tautology?  

I thought that was clear. You said “Every experience you are aware of is a conscious experience.” “Aware” and “conscious” usually mean the same, as I pointed out. It’s like saying every mistake is an error, or something equally uninformative.

RE: “imagine that a human being (you can imagine this to be yourself) has been subjected to an operation by an evil scientist…etc.”

Groan! Do I really have to engage in this childish silliness? Think about it! Something as hugely important as the nature of human consciousness is being discussed at the level of Hollywood comic book ideas. Why not Batman, Spiderman or something equally inane? 

Just tell me what conclusion you want to draw from it all. Spare me the tedium of thinking about it. 

DA


2016-10-10
RoboMary in free fall
Reply to Derek Allan
Regarding Berkeley, you can read about him yourself at http://plato.stanford.edu/entries/berkeley/; he does deny the existence of the physical. Are you capable of following his ideas?

Have you ever read Descartes' Meditations on First Philosophy?

Are you suggesting that you do not understand the idea of being aware of anything? Are you claiming that you are not aware of any sights, sounds, smells, thoughts, etc.?

The quote was from Hilary Putnam.  A philosopher. It is not childish silliness. Did you understand it or not?

2016-10-10
RoboMary in free fall
Reply to Glenn Spigel

RE: “Are you suggesting that you do not understand the idea of being aware of anything?”

I don’t think you are understanding what I’m saying. I am simply pointing out that your statement “Every experience you are aware of is a conscious experience” seems to be a tautology. If we are using words with their normal everyday meanings, conscious and aware mean the same. So of course an experience I am aware of is a conscious experience. What else could it be? So my point is the comment gets us nowhere. It tells us nothing we don’t already know. Like all tautologies.

RE: “The quote was from Hilary Putnam”

What quote? About the Matrix??  If so, so much the worse for Hilary Putnam.  

By the way, you seem to be deeply impressed by what various philosophers have said. They're not gods, you know. Fine to read them, but why not work out what you think yourself and present your own arguments? (Though I recommend steering clear of childish comic book stuff like zombies, brains in vats etc. Again, ponder 1 Corinthians 13:11.)

DA


2016-10-10
RoboMary in free fall
Reply to Glenn Spigel
Dear Glenn,
I sympathise with your championing of 'there is something that it is like'. It is something that all professional neuropsychologists recognise as a reality. Dennett has spent his life thinking he is championing 'contemporary materialism' but his materialism is like a doll that a child plays with until it learns a bit more about the world. It bears no relation to the view of biomedical scientists like myself. Dennett's position is a non-starter. Knock down refutations may be rare in philosophy but when you are shooting fish in a barrel, as we are with Dennett, they are two a penny. I am not sure it is worth spending the time to be honest.

One thing which may be important, though, is that I do not think you are right to suppose: 'Dennett then, it seems to me, considers that RoboMary would consciously experience red when in a similar situation to us experiencing red etc.' If RoboMary has anything to do with Jackson's Mary this is not the idea. If Mary has no RGB sensors she is not going to sense red under any model, but she may still 'know all the physical facts about red'.

Unfortunately, contemporary philosophy of mind seems to be a dialogue between people who do not know enough about science to get the basic premises right. David Chalmers's tussle with Dennett was from a position almost as weak. So things went round and round in circles. The real issue is whether or not there can be some type of event, either in a brain or a silicon chip, that has the range of possible patterns that our experiences have. We probably need 100-1000 degrees of freedom for the event to cover all the experiences we can describe.

If we say neither brain nor silicon chip has such individual events then we pretty much have to accept that there aren't such things as experiences of sunsets. We just have two types of machine cleverly designed to pretend so. We can be pretty sure there are no individual computational events in silicon with this number of degrees of freedom. However, in brains there may be. So the intuition that we can have experiences of sunsets but computers do not has a clear theoretical basis that can be tested, although doing so is extremely difficult.

Most neurobiologists' models assume that experiences are not individual events but banks of cell activations distributed in space. The problem with this is that it makes the causality non-local, and both subjectively and computationally it crashes. So we have to look for more local events. I will not enlarge here but there are options.

The other thing about 'what it is like' is that it is not such an absurd phrase, even if it seems so. To be like something there has to be a possibility of comparison. There is no such thing as comparison in 'materialism'. We take it for granted and for sure there are ways of making the world function in a way that allows us to instantiate a comparison - like using a weighing machine. But there are no comparisons in physics itself. Silicon gates do not work by comparisons. They take part in algorithms that will serve the purpose but there is no event of comparison. In a neuron, however, an event of comparison may be a real option. But we need to work out exactly what the physical relation would depend on - perhaps symmetry of a field of potentials or something.

So there are some real scientific problems underneath all this. But I have given up taking note of the contemporary phil of mind debates. They miss the point. The seventeenth century natural philosophers were much more astute - Locke, Hobbes, Leibniz, Descartes. They, I find very worth reading.


2016-10-11
RoboMary in free fall

RE: I sympathise with your championing of 'there is something that it is like'. It is something that all professional neuropsychologists recognise as a reality.

Gee! That’s impressive! “all professional neuropsychologists.” I have never polled “all professional neuropsychologists” about the Nagel “there is something it is like” nonsense, so I couldn’t comment.

RE: “The other thing about 'what it is like' is that it is not such an absurd phrase, even if it seems so.”

Actually that’s not the phrase. You had it right the first time. The hallowed Nagel formula is “There is something it is like to be conscious”. The twisted syntax is important: it’s why so many people have let themselves be bamboozled into thinking it makes an important philosophical point when in fact it is entirely vacuous.  

DA


2016-10-11
RoboMary in free fall
Thank you for your feedback.

Regarding where I had mentioned 'Dennett then, it seems to me, considers that RoboMary would consciously experience red when in a similar situation to us experiencing red etc.' I had meant when RoboMary finally gets "her" colour cameras installed (analogous to when Jackson's Mary is allowed to leave the black and white room). I realise that was not clear though, and thank you for pointing that out. 

I had not realised that Dennett was regarded as such an easy target and not worth spending the time on, but might I then ask if you could perhaps look at  another argument I supplied, one which is aimed at what I believe to be the mainstream scientific interpretation of reality. I had posted it here http://philpapers.org/post/21350 but for your convenience will paste it in below:

Mainstream physics interpretation: The universe is a physical one, and within it are either fundamental matter elements (strings or particles) and fields, or just fields. Whichever it is, the contents participate in making up forms which consciously experience and forms which do not. A goal of physics is to represent within the physics model the features which directly influence how the fundamental matter elements, or a field's likeness of them, behave. Those features are the same regardless of whether the matter and/or fields are participating in the composition of forms which consciously experience or not. Therefore the laws of physics do not distinguish between whether the behaviour is taking place within a form which consciously experiences or not.

The problem: The problem with interpreting the evidence we have for reality like that can become apparent when considering when the matter element(s) and/or field(s) participate in the non-consciously experiencing forms that it posits. What the conscious experience of such a form is like cannot be one of the features directly influencing the behaviour of any field and/or matter element of the form composition, because the form is not consciously experiencing, so there is no conscious experience to be like anything. So if the belief was correct that the directly influential features, which physics is being interpreted as attempting to model, were the same regardless of whether the field and/or matter element was participating in a consciously experiencing form or not, then what the conscious experience was like could not logically be thought to be one of the features directly influencing the behaviour of any field and/or matter element participating in a consciously experiencing form either.

There is no indirect influence which does not directly influence anything. But if what is consciously experienced is not directly influencing the behaviour of a single field or matter element, as the belief has been shown above to imply, then what is it directly influencing (which is not a field or matter element) in order to influence the behaviour of our forms?

If what the conscious experience was like was theorised to have no potential direct influence over anything, it could not even be an indirect influence on your behaviour. That would imply that no conscious experience could ever act as evidence, because to act as evidence it would have to have had direct influence of some kind. So what you consciously experience could not act as evidence that reality is one in which forms have been consciously experiencing, for example. If you believed that what you consciously experience is evidence to you that reality is one in which forms have been consciously experiencing, then it is illogical to hold that belief whilst also believing a story which implies that what is consciously experienced has no potential direct influence on anything.

Also, if one were to suggest that the fields and/or matter elements consciously experience (in a panpsychic view, for example) and that that feature is what one or more of the physics variables refers to, then it seems to me that there would still be the issue that the behaviour would reduce to what it was like to be individual fields and/or matter elements, which would be different from what it was like to be a certain arrangement.




2016-10-11
RoboMary in free fall
Reply to Glenn Spigel
Hi Glenn, I think you are really enunciating the old problem of solipsism, and reasonably enough, but it should not be confused with 'the hard problem', which I think is a misconception.

The problem with 'mainstream physics' of the sort that Dennett believes he is championing is that it is the position of the naive schoolmaster and television physicist. Brian Cox admits to being a naive realist physicist, for instance. The physicists who built the ideas of physics did not talk of a 'physical world' or a material world. Descartes' Meditations, Leibniz's Discourse on Metaphysics, and even Newton's first scholium were intended to point out that 'physical' is a meaningless circular term. Physics is intended to be the study of the regularities that determine the way the appearance of the world changes to us. It has to cover everything, so there is no sense in distinguishing physical from 'non-physical'. That distinction comes from religious people who want there to be magic, I think.

But forgetting the 'physical' your account of the world being just the behaviour of fields and their modes of excitation seems fair. To be honest I am not quite sure what string theorists think they think but I am pretty sure it has nothing to do with there being 'little strings'. I wish the television physicists would not dumb everything down. But the key point is that physics is only interested in the causal powers of its entities, not 'what they are' or even 'what they are like' - for good reason because only causal powers can cause us to know about anything in such a way as to confirm predictions. Science has to have confirmable (or non-refutable) predictions.

So your focus on causal powers is on the nail. We then have the apparently knotty problem of which causal powers come with 'phenomenality' - something along the lines of 'for which causal powers is there something it is like to be influenced by them?'. I think there are two answers of interest - at extremes of a spectrum.

I would first add that as far as I know there is nothing in a physics book that says some dynamic elements (e.g. modes of excitation of fields) experience and some do not. Some eminent neuroscientists seem to assume that but it seems to depend on the social norms of their parents. Panpsychism is the simplest position and nothing in physics that I know of (but see below later) challenges it.

The simple answer to the phenomenality issue is that phenomenality only ever applies to a direct dynamic relation - it is only ever totally proximal. If a photon from the sun strikes my eye it tells me nothing of what it might have been like to be in the sun. Whatever anything was like in an antecedent relation it is not like that here and now in this immediate relation of the present. In a sense it is a bit like an event being on next Tuesday. It adds nothing to the event, no extra causal powers, but it only applies next Tuesday. If you extrapolate from that you get the old solipsism problem - there can never be any evidence of any causal power due to phenomenality because it can never be documented other than here now. All we can say is that whatever phenomenality there is to a causal relation or action of causal power on an entity it adds nothing to the evidence of causal power we can document indirectly - just as happening 'next Tuesday' adds nothing to the event itself and for an owl with no calendar is an unknowable aspect of the event.

So there is absolutely no puzzle about the privacy of experience or whether other things have it. If phenomenality is always totally proximal, as our experience suggests, then there is no more to be said.

But there may be a caveat to this if we look at the way the dynamics of modes of excitation work. If phenomenality is totally proximal then an important inference is that whatever an 'I' or subject is, it is an individual dynamic unit with an experience based on a direct relation to a field of potentials. That in itself has huge implications that most neuroscientists are completely unaware of. Now, for most fundamental modes of excitation, their environment of potentials actually determines their identity - the solution to their 'wave equation' if you need a quantum formalism. For this to be the basis of the experience we talk about generates a possible paradox - as if something is 'informing itself' in a way that could not pass on causal relation. This may not be problematic but for models like that of Freeman and Vitiello I think it may be fatal. 

But for certain modes, such as acoustic modes, patterns in the environment with influences on the mode need not determine the nature of the mode, which is determined by a structural asymmetry of a body. That seems to mean these modes can have different experiences at different times. Whether that is important I do not know, but it might be that for there to be something it is like to feel this way and then that way the subject has to be a very particular sort of mode. We may then have justification for allotting our sort of experience only to special events in the world. But it will always be on the basis of circumstantial evidence, just as I infer experience for you, and the solipsism problem will remain.

2016-10-11
RoboMary in free fall
Thanks again for the feedback, but I am not quite clear on what you were stating. The main thrust of the argument I supplied was that in mainstream scientific interpretation of reality any direct causes can be reduced to the fundamental features referenced in physics, which implied that what is consciously experienced was not an influence on behaviour, because it was not any of those features (those all being found in forms that do not consciously experience). While I did think I dealt with panpsychic views also, the argument was slightly different, and so it would be useful for me to first understand your response with regards to the mainstream scientific interpretation of reality, where a cup or certain subconscious brain activity, for example, is not considered to be consciously experienced.

I can tell that reality is not a zombie universe, and I base my knowledge of that on the evidence of my conscious experience. So what I consciously experience is having an influence on me (I am using it as evidence). Do you dispute that conclusion? 

Your answer seemed to me to suggest that you were considering what is consciously experienced to have no influence (which is what I was considering to be the problem with the mainstream scientific interpretation). The reason it seemed that way is that I thought you were suggesting that, just as future events have no influence on current events, what is consciously experienced has no influence on any current events, the direction of causality running the other way: current events influence future events, and the current proximal dynamic determines what is consciously experienced. But that would seem to me to be incompatible with my being able to tell, based upon what I consciously experience, that this is not a zombie universe, because for me to base any understanding on what I consciously experience would mean that what I consciously experience does have an influence on my behaviour. Perhaps you might be able to clarify whether I have understood you correctly.

Also, as a side issue, regarding the distinction between the physical and non-physical which might appear in some non-physicalist ontologies, you mention that you think it "comes from religious people who want there to be magic...". I am not clear what you mean by "magic". Presumably you mean something other than fundamental properties that are by definition irreducible, because every physicalist ontology I am aware of, including the mainstream scientific interpretation of reality, has fundamental properties, and presumably you are not considering them to be "magic". But if not irreducible features of reality, what features of their account were you considering as a claim of magic?


2016-10-11
RoboMary in free fall
Reply to Glenn Spigel
Dear Glenn, 
Glenn: in mainstream scientific interpretation of reality any direct causes can be reduced to the fundamental features referenced in physics, which implied that what is consciously experienced was not an influence on behaviour, because it was not any of those features (those all being found in forms that do not consciously experience).

Jo: This oversteps what physics says. It does not say that what is consciously experienced is not an instance of operation of one of the causal dispositional properties that figure in the language of physics for distal events. Physics for distal events has to use a language separate from phenomenality but it does not say there is no experience. So if a field of potentials X is causally coupled to a mode of excitation Y (a subject of experience) then there is no reason why X should not be 'like a sunset' for Y. The default, parsimonious position, taken by Locke, is that we should assume that. However, X might only be like a sunset if X and Y are of a very special type - rather as suggested in my previous post. X might be a field of electrical potentials coupled piezoelectrically to a complex acoustic mode Y in a way that never really arises outside biological excitable tissue.

Another point is that phenomenality does not seem to appear in physics textbooks but it does, right at the beginning. A primer in physics that defines a force will say it is a push or a pull - which are experiences. A book on optics will indicate that photons of a certain wavelength give us an experience of red. So all physical properties are defined as causal dispositions - tendencies to cause P, if you like. And if you chase P far enough it will always be an experience. So that is why we know we are not in a zombie universe. Here and now, P is manifest to us. Without the experience P physics is a story of dispositions to dispositions to ... on for ever to nowhere.

Magic basically means breaking rules. What a magician does is magic because the rules do not allow ladies to be sawn in half and put together again. I have just been reading Catherine Wilson's 'Leibniz's Metaphysics' which has come out in a new edition. I highly recommend it because in the seventeenth century people like Descartes, Locke, Hobbes, Leibniz and Cudworth had sensible discussions about these issues without the confusion of the modern jargon. The sticking point was really whether God was a force that could break the rules by throwing the odd thunderbolt at someone who was sinning or whether God WAS the rules and operated through regularity. The conservative religious establishment has always wanted a God that breaks rules because that allows them to threaten the flock with retribution. But Enlightenment thinkers wanted God to be the rules - just as awesome but reasonable and reliable. So in philosophy 'non-physical' really means no scientist can predict it because it is due to a magician God.

There is another distinction of relevance though. Mental events cannot be placed in space because there are no internal sensors with moving parts that can track our brain events in space. There are internal clocks that can track them in time, but not space. This means that mental events such as thoughts or ideas can be catalogued in time - I thought about the sea this morning - but not space. I do not know if I thought about the sea in my occipital lobes or my frontal lobes. This has led to the bogus proposal that thoughts are not in space but only in time and so are in that sense non-physical. It relates to the dictionary definition of physical as that which we can access through our senses - it is an epistemological category, not an ontological one.


2016-10-12
RoboMary in free fall
Sorry for the late reply, but I am only permitted to post twice a day, which is why I have restricted my responses. (I have not yet responded to Derek Allan, for example, because he has a tendency not to answer the questions, so I have to post multiple times asking the same thing, effectively wasting my limited posts; no offence was meant to him.)


RE: 
Glenn: in mainstream scientific interpretation of reality any direct causes can be reduced to the fundamental features referenced in physics, which implied that what is consciously experienced was not an influence on behaviour, because it was not any of those features (those all being found in forms that do not consciously experience).

Jo: This oversteps what physics says. It does not say that what is consciously experienced is not an instance of operation of one of the causal dispositional properties that figure in the language of physics for distal events.
Physics for distal events has to use a language separate from phenomenality but it does not say there is no experience.


I was not commenting on what physics states, only on what the mainstream scientific interpretation of reality assumes it suggests (which would be different from a panpsychic interpretation, which, like an idealist one, would still be compatible with discoveries in physics).

Neither was I suggesting that physics states there is no conscious experience. It is simply silent on the matter. The point I was making was stated in the argument:

"The problem with interpreting the evidence we have for reality like that can become apparent when considering when the matter elements(s) and/or field(s) participate in the non-consciously experiencing forms that it posits.  What the conscious experience of such a form is like cannot be one of the features directly influencing the behaviour of any field and/or matter element of the form composition,  because the form is not consciously experiencing, so there is no conscious experience to be like anything.  So if the belief was correct that the direct influential features, that physics is being interpreted as attempting to model, were the same regardless of whether the field and/or matter element was participating in a consciously experiencing form or not, then what the conscious experience was like could not be logically thought to be one of the features directly influencing the behaviour of any field and/or matter element participating in a consciously experiencing form either."

That argument is regarding the mainstream scientific interpretation of reality. The causal features of any cognitive state that a person might have, which could be regarded as a disposition to behave in a certain way, could be reduced to causal features found in forms that were not consciously experiencing. A simple version of the argument, viewing things at the atomic level, would be as follows:

If 

(1) all atoms in a form that does consciously experience would behave the same if individually they had the same surroundings in a form which does not.

and 

(2) The reasons for the behaviour would be the same in both cases. 

then 

(3) What the form was consciously experiencing is not a reason for any atomic behaviour. 

because given (2) the reasons for each atom's behaviour are the same reasons as when in a form that is not consciously experiencing. 

I am considering (1) and (2) to be premises of the mainstream scientific interpretation of reality. Just to be clear, (1) is referring to the direct surroundings of the individual atoms, such that the chemistry of a hydrogen atom will be the same whether it is in a form that consciously experiences or in one that does not, for example. Chemical reactions taking place in your body could take place in a laboratory outside of any consciously experiencing form (just to be clear, I am assuming the mainstream scientific interpretation here). I was not suggesting that the mainstream scientific interpretation of reality explicitly states that what is consciously experienced does not influence the behaviour; rather, I was suggesting that it implies it by those premises. It was an implication that I had assumed some holders of the viewpoint might not have realised.

Regarding the issue that I can tell that reality is not a zombie universe, basing my knowledge of that on the evidence of my conscious experience: I stated that I could conclude from this that what I consciously experience is having an influence on me (I am using it as evidence). You did not mention whether you dispute that conclusion.

Regarding the magic issue, Chalmers makes the comment in his paper "Consciousness and its place in nature":
---
Nevertheless, quantum mechanics seems quite compatible with such an interpretation. In fact, one might argue that if one was to design elegant laws of physics that allow a role for the conscious mind, one could not do much better than the bipartite dynamics of standard quantum mechanics: one principle governing deterministic evolution in normal cases, and one principle governing nondeterministic evolution in special situations that have a prima facie link to the mental.
---

And as I understand it, Hameroff and Penrose suggest that microtubules are sensitive to quantum events, and could influence neural firings, which would be the kind of structure that a dualist or idealist might expect the brain to contain. My point is that what you regard as magic would seem to depend on what you thought the rules were. And I am not clear why some conceptions of what the rules are should be regarded as "magic" but not others. By the way, I was not commenting on Hameroff and Penrose's assumptions about how the relevant quantum events might come about.


2016-10-12
RoboMary in free fall
Reply to Glenn Spigel
Following on from my previous post, there were a couple of things I was not clear about regarding what you were stating.
 
I did not understand your claim that "all physical properties are defined as causal dispositions - tendencies to cause P, if you like. And if you chase P far enough it will always be an experience." At least not in the light of a mainstream scientific interpretation of reality. If I were to imagine a chemical reaction happening in some distant star that was never observed by anything conscious, then in what way would the physical properties of the matter and/or fields taking part in that chemical reaction be thought to be giving rise to an experience? Were you considering some panpsychic view, or is there something I am missing there?

Nor did I fully understand your comment linking predictability to magic. I mentioned in my previous post the microtubules discussed by Hameroff and Penrose, and I am not sure why, if the subject stated what they were going to will, it would be any less predictable which range of firing patterns would occur if God caused the firing than if the firing was influenced by quantum randomness. If it was more predictable, would you suppose that it was less magic than quantum randomness?


2016-10-12
RoboMary in free fall
Reply to Glenn Spigel
Dear Glenn, I would forget about a 'mainstream scientific interpretation'. There is no such thing. There is just gossip by television personalities and soft articles in Scientific American and such. The people who taught me science would just smile wisely at the concept. It is a windmill not worth tilting at. No serious thinker belongs to it. There is no mainstream scientific view that attributes consciousness to some things and not others - that is a man in the street view. I am sure there are jobbing scientists who take this view but it is not scientific because it is untestable.

The random non-deterministic element of quantum theory is part of the rules. And as Leibniz guessed it has to be part of the rules if you have discrete dynamic entities (like modes of excitation) operating with continuous rules in four dimensions. There is no alternative to a degree of randomness. That is not magic. Magic is having a set of rules and saying that somehow, through some 'will', God or souls can break the rules. It is hopeless because if rules do not apply you cannot make predictions and test models. And we have no reason to think it is needed.

I do personally think one needs to consider things at the quantum level to make any headway with consciousness but not because of random tweaking. Experience has to be totally proximal as far as we know and totally proximal means a relation at the fundamental level - not some nominal relation of aggregates like the collection of atoms in a billiard ball moving the collection of atoms in another. Fortunately modern condensed matter physics allows you to analyse in quantum level terms at macroscopic scales. (Feynman said that in his lectures in the 1960s but nobody noticed.) 

(Derek seems to be a court jester whose king died a while back.)

Read Leibniz - it is all there!

Best wishes

Jo

2016-10-12
RoboMary in free fall
Reply to Glenn Spigel
Thanks for the second post, Glenn,
OK, the disposition is defined as a tendency to cause experience P if you assume that a human observer is placed in the appropriate position at the appropriate time. 

And of course for cosmology much of the time we make observations on aggregates of events and interpolate - like 'observing' dark matter. Dark matter makes the galaxies look too thin for the amount of gravitational force or something found to be around (I never really understood it). All observation is inference from indirect evidence. We observe electrons just as much as we observe elephants. 

But the main point is that a dynamic disposition is ultimately defined in terms of what a human would feel if put in a position to sense what is happening, however indirectly through satellites or electron microscopes.

2016-10-12
RoboMary in free fall

RE: "(Derek seems to be a court jester whose king died a while back.)

Indeed. Oh for an interlocutor who is willing and able to analyse propositions carefully! Someone, for example, who would see the obvious problems in saying "Every experience you are aware of is a conscious experience", or that "there is something that it is like ... is something that all professional neuropsychologists recognise as a reality".

I don't need a king though; just a philosopher... 

DA


2016-10-13
RoboMary in free fall
Reply to Glenn Spigel

Hi Glenn,

 

Thank you for this post. Like you, I’m a qualia fan. However, I do not understand how your thought experiment should convince Dennett (or any other physicalist). Take for example the functionalist or the crude behaviourist. They would say that experiencing the colour red is responding in such and such way to a red stimulus in the subject’s environment. Therefore, they would say that the robot in the third room is experiencing neither the colour red nor the colour blue (as we do). Q.E.D.

(A different functionalist might say that when the receiver gets the signal from the first room the robot experiences the colour red, and when it gets it from the second room it experiences the colour blue.)

Maybe I miss something here. Still, it is not clear to me why in this thought experiment you don’t simply beg the question against the physicalist.

Yours,

Amit


2016-10-13
RoboMary in free fall
Dear Jonathan you wrote:


Dear Glenn, I would forget about a 'mainstream scientific interpretation'. There is no such thing. There is just gossip by television personalities and soft articles in Scientific American and such. The people who taught me science would just smile wisely at the concept. It is a windmill not worth tilting at. No serious thinker belongs to it. There is no mainstream scientific view that attributes consciousness to some things and not others - that is a man in the street view. I am sure there are jobbing scientists who take this view but it is not scientific because it is untestable.


I would assume that the view that test-tubes do not consciously experience is a widely held view amongst physicalists, whether they are scientists or philosophers or neither. I think that panpsychism (the only alternative physicalist view I can think of) is a minority view. That is why I am labelling the view that test-tubes, for example, do not consciously experience as the mainstream physicalist view. You yourself later clarified that your statement "all physical properties are defined as causal dispositions - tendencies to cause P, if you like. And if you chase P far enough it will always be an experience" was meant only in the sense that there would eventually be a human observer (or other consciously experiencing observer, I assume). You did not seem to be suggesting that the chemicals in the chemical reaction in the far distant star that I used as an example when questioning your statement were the holders of the experience.

Anyway, the belief that I am labelling as a mainstream physicalist belief (that things such as stones do not consciously experience) is what the main argument is against, and so far you do not seem to have come up with a refutation of the argument. In fact you do not seem to have directly addressed it at all. If that is because you consider that I have shown the view I was attacking to be implausible, but just regard doing so as shooting fish in a barrel and not something of wide interest, because you hold a panpsychist view, then please make that clear, and I will happily move on and explain further my attack on the panpsychist viewpoint.

Also, you have still not answered the question I have twice asked you before: I can tell that reality is not a zombie universe, and I base my knowledge of that on the evidence of my conscious experience. I stated that I could conclude from that that what I consciously experience is having an influence on me (I am using it as evidence). Do you dispute that conclusion?  

2016-10-13
RoboMary in free fall
Reply to Amit Saad
Hi Amit, 

You commented:
...I do not understand how your thought experiment should convince Dennett (or any other physicalist). Take for example the functionalist or the crude behaviourist. They would say that experiencing the colour red is responding in such and such way to a red stimulus in the subject’s environment. Therefore, they would say that the robot in the third room is experiencing neither the colour red nor the colour blue (as we do). Q.E.D.

(A different functionalist might say that when the receiver gets the signal from the first room the robot experiences the colour red, and when it gets it from the second room it experiences the colour blue.)

Regarding the first point: you make it sound as if a legitimate response could be to change what is meant by consciously experiencing. But the argument was in response to Dennett's comment that:

Robinson (1993) also claims that I beg the question by not honouring a distinction he declares to exist between knowing "what one would say and how one would react" and knowing "what it is like."  If there is such a distinction, it has not yet been articulated and defended, by Robinson or anybody else, so far as I know.  If Mary knows everything about what she would say and how she would react, it is far from clear that she wouldn't know what it would be like. 

Dennett is not claiming to not understand what Robinson meant by "what it is like". And the objective was just to make it clear that knowing what the robot would say or how it would react does not entail knowing what it would be like to be the robot. Claiming not to understand what was meant, or giving a response that does not address the issue is not a counter to the argument.

Regarding the second point, I assume you are just doing the same thing again, using the words to mean different things, and therefore not discussing the same thing. Again changing the subject is not addressing the point that was being discussed. Neither is claiming ignorance of what is being discussed.

2016-10-13
RoboMary in free fall
Reply to Glenn Spigel
Dear Glenn, I respect the thrust of your argument so far, but there are a few niceties of mine you have ignored as well!

So yes, I consider a mainstream physicalist view as beneath consideration if it assumes that test tubes have no experience. It cannot really be a mainstream SCIENTIFIC view because it is an untestable view and science has to be testable.

But I do not have to be a panpsychist to deny this approach. As a scientist I would have to be neutral forever, because there can never be empirical demonstration of something that is always proximal in an event that is distal. But as a 'natural philosopher' I would like to be bolder and commit myself to some sort of opinion on what is likely. My opinion is that any interaction between universe and dynamic unit may well be 'manifest' to that unit in some way we cannot describe further. However, as indicated in my note about acoustic modes, there is a serious metaphysical possibility that even the barest manifestation can only occur when the influencing field does not define the very nature of the dynamic unit. This gets into really hairy physics but it may provide us with a clear-cut metaphysical distinction between modes in cells that can have the world manifest to them and modes for which this has no meaning. So I do not need to be a panpsychist.

I think the problem may be that you are assuming the influence of, say, the valency electrons of a copper ion in a test tube on nearby molecules has to be the same as the influence of such electrons on complex acoustic modes in cells. My understanding is that influences are intrinsically constituted both by what influences and what is influenced through what may be a very specific coupling mechanism. So it may be that phenomenal experience only occurs with certain types of coupling - maybe piezoelectric coupling for instance. And it may not be manifest as anything we could vaguely recognise as an experience unless the coupling involves a complex set of parameters related to a complex acoustic mode. 

So we have more than two possibilities.

But I am interested to hear the attack on the panpsychist viewpoint. That sounds more interesting. 

2016-10-14
RoboMary in free fall
I have already attempted to explain that by the term "mainstream scientific interpretation of reality" I do not mean what science implies about reality; instead I mean what the majority interpret science as suggesting. Though on reflection I think it would be better written as "mainstream interpretation of science by scientists", even though the interpretation would not be restricted to scientists, but is currently (I think) shared also by the majority of philosophers.

You mention that you "think the problem may be that you are assuming the influence of, say, the valency electrons of a copper ion in a test tube on nearby molecules has to be the same as the influence of such electrons on complex acoustic modes in cells." Which makes me think you may not have understood the argument. The issue was that with the mainstream interpretation of science by scientists the fundamental variables (those not considered to be reducible to other variables) in the equations are considered to represent features of the underlying that have a direct  influence of behaviour. And those features are considered to be the same regardless of whether the equation is being applied to something forming a consciously experiencing thing, or something forming something which is not consciously experiencing. So whatever feature of a quark the spin (ignoring for now whether in some theories that feature might be reducible) refers to for example is the same feature regardless of whether the quark is participating in a form that consciously experiences or not. So it is not considered that when the quark is participating in forms that do not consciously experience, the spin is considered to refer to one feature, but when the quarks are participating in forms that do consciously experience it refers to another. So the feature is not thought to refer to any feature of what-it-is-like to be a quark. Therefore no feature of what-it-is-like is being considered to be influencing the behaviour in a consciously experiencing form, as the features of the underlying that are considered to be a direct influence are members of the set of features considered to directly influence things that are not consciously experiencing. So it does not matter whether it might be claimed that a field only has a certain form in things that do consciously experience, it only matters whether the field's behaviour can be described using variables that refer to features that can be found in things that do not consciously experience. So the complexity of the equation does not matter, only what features its variables refer to.

Regarding the panpsychist view: unless it is stated that some of those fundamental variables refer to features of what-it-is-like to be the underlying constituents, the problem would be the same as that described above. If they are considered to refer to features of what-it-is-like to be the constituent, then the direct influences can be reduced to those what-it-is-like features, and what-it-is-like to be you is not one of those features, since those directly influential features would be found in much simpler forms. This is not to assume that what-it-is-like to be you could not arise through some behaviour (though I have another argument regarding the implausibility of neural states having the symbolism that they do if reality were a physicalist one; if you would be interested, I could perhaps post it on a different thread). It is simply that what-it-is-like to be you would not be a feature that is a direct influence. The influences on behaviour could be reduced to the direct influences represented by the fundamental variables in the equation used to describe it, none of which refer to what-it-is-like to be you. You could break the equation down to understand the features that were considered to be a direct influence on behaviour (such as what-it-was-like to be a quark with a spin of 1/2).

2016-10-14
RoboMary in free fall
Reply to Glenn Spigel
Dear Glenn, the problem is that the interpretation you are calling mainstream has nothing to do with science, which is about testable ideas. And I am not at all sure that most scientists would go for it. Most would raise their eyebrows and say 'pass'. Those that think about such things would point out that we cannot know, so the interpretation has nothing to do with science - as I have.

What worries me about twentieth century philosophy of mind - still going on it seems - is that it is bogged down in straw men like this. In the seventeenth century people had a much wider perspective.

I don't think anyone is suggesting that 'a feature of what it is like is influencing'. As I see it we are interested in what it is like to be influenced. (So the spin of the quark may affect what it is like for some other dynamic unit to be near the quark, not what it is like to be the quark - that would not make sense to me.) Being like something is not some new dynamic power or influence - I take it to be what we treat it as - what it is like to be proximal to a physical dynamic influence. Nothing distal is like anything to me and only I can judge, so I think it is a category mistake to try to attach what it is like to distal things and worry if it makes a difference.

Which means that there is no reason to think that what it is like to feel like being me in a world is not what it is like for a dynamic unit to be influenced according to the laws of physics. Something needs to make it feel like that and why not physical influences? What it is like is not the same thing as the influence, it is what it is like to be influenced. We have to have a completely different language for that because of ascertainment problems but there is nothing to indicate the two languages are not covering the same causal interactions. Everything we know from science indicates that what it is like to be a human subject is what it is like for some dynamic unit to be influenced by the world via various brain events. The interesting problem is what dynamic unit is the human subject for which it is like that.

2016-10-14
RoboMary in free fall
Reply to Glenn Spigel

Thank you, Glenn. I think that I understand your position now.

Correct me if I’m wrong, but you do not hope to attack the physicalist by the thought experiment. The thought experiment is rather directed against someone like Dennett, who allegedly respects Robinson’s/Nagel’s claim that “there is something which is like to be X”, but does not think that there is a distinction between knowing how X would react, and knowing what it is like to be X. Is that right? If this is correct, then what does your thought experiment add to Nagel’s bat? After all, given that the physiology of bats is known, we do know how a bat would react (under certain conditions), but presumably we don’t know what it is like to be a bat. If your thought experiment undermines Dennett’s position, then so does Nagel’s.

Regarding the second point - I agree with you that the behaviourist actually changes the subject. We’re on the same side. However, the behaviourist claims that she doesn’t change the subject. This is where the debate on consciousness got stuck many years ago. The reductionists (be they behaviourists, functionalists or what have you) are blamed for changing the subject, but stick to their claim that they don’t. Now, you and I need arguments to show that they change the subject. Simply claiming they do so would not work.

Yours,

Amit

2016-10-15
RoboMary in free fall
You are not sure whether the majority of scientists think that it is not like-anything-to-be a test tube? I was assuming that the vast majority thought there was nothing it is like to be one, and that this was one of the reasons there are no moral pressure groups concerned about the morality of placing test tubes in bunsen flames. But since neither of us has done a survey, I guess we can put the issue aside, since it is only a labelling issue.

You stated in your response:
Something needs to make it feel like that and why not physical influences? What it is like is not the same thing as the influence, it is what it is like to be influenced.

I agree that what-it-is-like need not be thought of as an influence; the influence could be thought to be reducible to other features, such that if this universe were imagined to be a physical one, for example, it could be suggested that what-it-is-like is not itself an influence but instead is just what-it-is-like to be influenced by other features. But the point was that if the feature of what-it-is-like were not itself an influence, then what-it-is-like could not act as evidence.

The post before last I wrote:

Also, you have still not answered the question I have twice asked you before: I can tell that reality is not a zombie universe, and I base my knowledge of that on the evidence of my conscious experience. I stated that I could conclude from that that what I consciously experience is having an influence on me (I am using it as evidence). Do you dispute that conclusion?  

But you did not answer that time either. Are you suggesting you cannot tell whether reality is a zombie universe or not?

2016-10-16
RoboMary in free fall
Reply to Amit Saad
Yes, you are correct that the original argument in this thread was not a general attack on physicalism (even the issue that I am discussing with Jonathan Edwards on this same thread is not, though that is an attack on a quite widely held physicalist perspective). I think the argument is similar to the bat argument. But a person might think that they could work out the experience the Mark 19 robot would have had with the colour vision cameras in (a functionalist might think they were able to). It then seems to follow that if people could, then why not RoboMary? And if RoboMary could, then why could it not be worked out even when only having the black and white cameras in (which is what I was assuming Dennett was claiming)? Furthermore, if RoboMary could have worked out what it would be like to consciously experience blue when only having had black and white cameras, then RoboMary would have worked out what it would be like to experience something she had not previously experienced. Which would then bring into doubt the claim that it would be impossible to work out what a bat would consciously experience just because we lack the experience. Why could we not work it out like RoboMary? Dennett is not stating how RoboMary could work out what the experience of blue was like, but seemed to be challenging others to show that she could not.

The argument I supplied just showed a simple scenario in which it can be seen that in room 3 there would be no basis to assume the experience being more like your experience of blue or more like your experience of red. You had touched upon the possibility that the response could be that the experience would change between red and blue, but remember the robot could not report any difference in experience, and so if it were claimed that in the third room the experience changed from red to blue, then consider the following statement by Dennett:

Thinking in terms of robots is a useful exercise, since it removes the excuse that we don’t yet know enough about brains to say just what is going on that might be relevant, permitting a sort of woolly romanticism about the mysterious powers of brains to cloud our judgment. If materialism is true, it should be possible (“in principle!”) to build a material thing–call it a robot brain–that does what a brain does, and hence instantiates the same theory of experience that we do

But if the robot brain did instantiate the same theory of experience that we do, as Dennett suggests, and yet it was unable to respond to the suggested change of experience from red to blue, then that would suggest that we, like the robot, cannot respond to what we consciously experience. Which I think was the point that zombie arguments were making regarding some physicalist theories. So the thought experiment opens up attacks on different points depending on the response given, which I think is different from the bat argument.

By the way do you dispute the following:

You can tell that reality is not a zombie universe based on your knowledge that you consciously experience, and you can conclude from that that what you consciously experience is having an influence on your behaviour (you are using it as evidence in your claim that reality is not a zombie universe).

Also regarding zombies, and this is just a pretty unrelated side issue, only mentioned to avoid the topic changing to whether the idea of them is compatible with physicalism: they could be imagined in a universe which is physically different from ours, but which follows the same laws of physics, such that any experiment would give the same results. This is just to conceive of them in a way which is not incompatible with physicalism (no suggestion of two physically identical things having different features).

Regarding your point about the behaviourist or functionalist who claims they do not change the subject: if they are simply stating that if something has certain features (functions in a certain way, for example) then it will also have the feature of consciously experiencing, then that is fine; they can still follow the conversation and they can understand the point in the argument.

However, if they suggest that by consciously experiencing you did not mean the feature you did but other features, then you are in a position to correct them, the same as any other time a person might misunderstand what you meant. For example, if you felt like indulging them, you could listen to the features they thought you meant and paraphrase those features back to them, just to allow them to be clear that you understand the features they are talking about, before informing them that you are discussing another feature. They can claim to be ignorant about which feature you are discussing, but why should their ignorance be your problem? Indeed, if they were suffering from some type of Emperor's New Clothes syndrome, where they thought it clever to talk as though the feature were absent, you could well be wasting your time while they remained inclined to think it clever to claim to see it that way. I would be interested in what you make of the conversation I was having with Derek Allan on this thread.

Anyway, while I accept that it could be a problem if you were in an academic institution where you might be relying on such people to understand an argument which contained the concept of consciously experiencing, other than that they are just sidelining themselves by claiming they cannot understand what obvious feature of reality other people are discussing. It seems to me it would be strange for them to think they could join in the discussion while claiming to be ignorant of the feature the people were discussing. They could just be ignored while the conversations are carried on with those that do understand. You can claim they are ignorant of what you are discussing, but they cannot claim the same of you in return regarding what they are discussing (that it entails more than your paraphrasing of it).

Though I am not sure that these latter types of behaviourists or functionalists are mainstream; I assume they misunderstood the mainstream position. That is because I have assumed mainstream behaviourists are of the mind that in a scientific context they should limit themselves to discussing behaviour, because that is what they consider to be the subject matter of science, and that therefore in a scientific context the term consciously experiencing should take on a behavioural meaning, in the same way that words can take on different meanings in different contexts. It would be changing the subject, though, if that meaning were used in response to a philosophical argument which was not using that meaning. And I assume a functionalist would normally advocate functionalism as an explanation of the type of behaviour which gives rise to the feature, and so would not be denying that they understood what feature they were suggesting certain functional behaviour would give rise to.

2016-10-16
RoboMary in free fall
Reply to Glenn Spigel
Re Dennett’s “Thinking in terms of robots is a useful exercise, since it removes the excuse that we don’t yet know enough about brains to say just what is going on that might be relevant, permitting a sort of woolly romanticism about the mysterious powers of brains to cloud our judgment. If materialism is true, it should be possible (“in principle!”) to build a material thing–call it a robot brain–that does what a brain does, and hence instantiates the same theory of experience that we do.” 

This is patent nonsense. He admits that we "don't yet know enough about brains to say just what is going on",* yet in the same breath says "it should be possible ("in principle!") to build a material thing – call it a robot brain – that does what a brain does".

So although we don’t know what a brain does, we can still go ahead and build one. This is the kind of absurd statement that made me drop Dennett’s book after about 5 pages. It always amazes me that so many philosophers take him seriously. Yet they do…

(* The “that might be relevant” is just a distraction: clearly we're not interested in what isn't "relevant". And relevant to what anyway?) 

 DA

2016-10-16
RoboMary in free fall
Reply to Derek Allan
I think by "in principle!" Dennett just means current pragmatic considerations aside (such as not knowing how to, and technological considerations etc. ) . With regards to the relevancy issue, I assume he meant what it was about brains that was relevant to us consciously experiencing.

You did not mention in your previous reply to me whether you understood the "science fiction possibility" being described by Putnam. Just to be clear, I am not asking whether you can see why your understanding it or not is relevant to the conversation we were having; I am just asking whether you understand it. For your convenience I will post it again:

Here is a science fiction possibility discussed by philosophers: imagine that a human being (you can imagine this to be yourself) has been subjected to an operation by an evil scientist. The person's brain (your brain) has been removed from the body and placed in a vat of nutrients which keeps the brain alive. The nerve endings have been connected to a super-scientific computer which causes the person whose brain it is to have the illusion that everything is perfectly normal. There seem to be people, objects, the sky, etc; but really all the person (you) is experiencing is the result of electronic impulses travelling from the computer to the nerve endings. The computer is so clever that if the person tries to raise his hand, the feedback from the computer will cause him to 'see' and 'feel' the hand being raised. Moreover, by varying the program, the evil scientist can cause the victim to 'experience' (or hallucinate) any situation or environment the evil scientist wishes. He can also obliterate the memory of the brain operation, so that the victim will seem to himself to have always been in this environment. It can even seem to the victim that he is sitting and reading these very words about the amusing but quite absurd supposition that there is an evil scientist who removes people's brains from their bodies and places them in a vat of nutrients which keep the brains alive. The nerve endings are supposed to be connected to a super-scientific computer which causes the person whose brain it is to have the illusion that . . . (?)


2016-10-16
RoboMary in free fall
Reply to Glenn Spigel

RE: I think by "in principle!" Dennett just means current pragmatic considerations aside (such as not knowing how to, and technological considerations etc. 

The “in principle” doesn’t save him in the least. If anything it makes his position worse because he’s saying: OK, we mightn’t actually be able to do it, but we know what we are aiming to do. Yet he has just said: “we don’t yet know enough about brains to say just what is going on”. It’s an obvious blooper. His book is full of them.

As for the relevancy point, you say, "I assume he meant what it was about brains that was relevant to us consciously experiencing". Of course he must mean something like that. But again that only makes his position worse. He presumably means the brain in all its aspects – not excluding consciousness.

The man is a fool – a plausible, glib fool, but a fool nonetheless.

Re the brain in vat thing. What is the point of asking me if I understand something if I think that something is philosophical nonsense – the stuff of comic books on a par with zombies, Superman, Batman et al. Of course, I understand what's being said; I just think it is vacuous Hollywood silliness. Why would I want to try to imagine vacuous Hollywood silliness? I gave that up at about age 10, along with playing cowboys and Indians.

DA


2016-10-16
RoboMary in free fall
Reply to Glenn Spigel
Dear Glenn, OK, we can forget about whether scientists are panpsychists. My point was that whether they are or not has nothing to do with their being scientists. If they are Buddhists they may be likely to be. If they are in the US, where many people are Christian theists, they are likely not to be. I don't see any moral arguments. Test tubes may just love being in bunsen flames.

I see no argument for what it is like not being evidence. Our scientific evidence is entirely based on what it was like to see the dial or hear the crackle of the Geiger counter. This is what I find so odd about contemporary phil of mind debates. People have lost sight of the fact, well understood 300 years ago, that all science is based on making use of experience as evidence. We have to convert that experience into a form that can be carried over usefully next time, so we have clocks and rulers. But at the time of observation we have always, and will always, rely on what it was like. The gold standard for defining a physical world has to be a world in which there are regularities of disposition to engender certain experiences under prespecified standard conditions. That does not mean there is no mind-independent world, just that the physical description of it is always anchored in what it is like.

I guess I thought it was clear that I would agree that what you consciously experience is having an influence on some dynamic unit that we can call 'you' for the time being, although that raises a number of major issues down the line. But note that the influence is from whatever X (maybe a field of potentials) it is that determines the pattern of your conscious experience. I see no reason why X should not be an influence specified by physics. The only reason why it is like something in a way that events in test tubes do not seem to be for you is that it is a totally proximal relation to 'you' whereas the heat on the test tube is not.

The whole mystery of the 'hard problem' is resolved by the simple and familiar fact that the only dynamic relations or influences that are like something to M are those which are totally proximal to M. That proximal what it is likeness is built into the entire edifice of physical science, as indicated above.

But that entails that we can never tell whether or not everyone else except this 'me' is a zombie or not. We can only tell that 'I' am not a zombie. It is just that it would seem an extraordinary coincidence if other people use the same language for what things are like, even in novel situations where this is not just an established convention, if they are zombies.

2016-10-17
RoboMary in free fall
You state:
I guess I thought it was clear that I would agree that what you consciously experience is having an influence on some dynamic unit that we can call 'you' for the time being, although that raises a number of major issues down the line. But note that the influence is from whatever X (maybe a field of potentials) it is that determines the pattern of your conscious experience.



But I do not see how that works, and I will explain why. You seem to be of the mind that the set of features that have a direct influence are the features referenced by the variables in the physics equations, and what-it-is-like to be you is not one of those features. As I understand it, you are not suggesting that the variables refer to any what-it-is-like feature; you are not thinking that the spin of a quark refers to some feature of what-it-is-like to be a quark. However, you also know what you are consciously experiencing and so are of the mind that what-it-is-like to be you is influencing your behaviour. It seems to me that you then attempt to patch over that contradiction by suggesting that those features that are influential, by behaving dynamically in some way, determine what-it-is-like. So what if they do? That does not make what-it-is-like an influential feature.

You could imagine a physical universe with a different underlying physical nature than the one imagined, but which shared the same influential features influencing behaviour in the same way (so there would be the same laws of physics), but in which a similar proximal dynamic happening would determine what-it-is-like to be different. The behaviour would be the same in both cases, and could be reduced to the same influences. The reason for imagining the different universe is not to show anything about our own; it is just to highlight what seems to me to be a clear mistake in your reasoning, which is thinking that influential features giving rise to epiphenomenal features allows the epiphenomenal features themselves to be thought of as causal. You seem to do this in the first sentence I quoted by conflating the feature of consciously experiencing with the influential features you imagine to determine that feature. But the behavioural reaction in the account is in response to those features, not the feature they determined. And for an account to be compatible with you reacting to what you consciously experience, it needs to have you reacting to features of what you consciously experience, not other features.

Now you might respond that a sphere has the feature of being spherical, which is determined by the influential features of its constituents, and that being spherical is regarded as an influential feature. But being spherical is a conceptual feature, much like being a pump, for example. While the conceptual category the object falls into would not be reducible to the constituent parts of the pump or the sphere (there being no signs of the conceptual category there), any behavioural influence that the arrangement of the constituent parts would have would be. Consciously experiencing a sense of understanding the concept of a sphere might be thought of as a feature of certain proximal dynamic happenings (such as certain neural activity, rather than of the object conceived of as being spherical), but the conscious experience of what-it-is-like to be you having such a sense of understanding would not be a feature that is reducible to the features which are being considered to be a direct influence.

This is not to suggest dualism, since it is not suggesting that the features (influential or otherwise) are not features of a physical underlying reality; it is just that the features of what-it-is-like to be you are not implied by the influential features the variables in physics refer to, whether those variables referred to what-it-is-like features or not. If the variables did not refer to what-it-is-like features, then the features do not imply anything about what-it-is-like (thus you can imagine an alternative physical universe as I did above). If the physics variables did refer to a what-it-is-like feature (if the spin of a quark referred to a feature of what-it-was-like to be a quark, for example) then, as I mentioned in a previous post:

If they (the physics variables) are considered to refer to features of what-it-is-like to be the constituent, then the direct influences can be reduced to those what-it-is-like features, and what-it-is-like to be you is not one of those features. Since those directly influential features would be found in much simpler forms. This is not to assume that what-it-is-like to be you could not arise through some behaviour.... It is simply that what-it-is-like to be you would not be a feature that is a direct influence. The influences on behaviour could be reduced to the direct influences represented by the fundamental variables in the equation used to describe it, none of which refer to what-it-is-like to be you. You could break the equation down to understand the features that were considered to be a direct influence on behaviour (such as what-it-was-like to be a quark with a spin of 1/2).   

2016-10-17
RoboMary in free fall
Reply to Derek Allan
Regarding the Putnam quote I supplied, you stated: 

Of course, I understand what's being said.

Well where Putnam stated:  

Moreover, by varying the program, the evil scientist can cause the victim to  'experience'(or hallucinate) any situation or environment the evil scientist wishes.

The term "consciously experience" as I am using it in the original post can considered as a synonym for the 'experience' expression as Putnam was using it. And the term "what it is like" refers to what the conscious experience is like, and in the Putnam imagining the evil scientist was imagined to be able to determine what it would be like.

2016-10-17
RoboMary in free fall
Reply to Glenn Spigel

RE: Regarding the Putnam quote I supplied,…

Good grief!  You probably told me, but I’d forgotten that Putnam wrote that guff about an “evil scientist” and a brain in a vat. Long time since I’ve read any of him, but I gave him credit for more sense.

As for the rest, let me be clear. When I said “Of course, I understand what's being said” I meant I understood the words – the Hollywood scenario being described. Philosophically, I think it is pure, unadulterated, juvenile nonsense. Every bit as vacuous as the Dennett twaddle I’ve just drawn your attention to.

DA


2016-10-17
RoboMary in free fall
Reply to Derek Allan
Language and words develop generalized senses and lose the original specific meaning they once indicated. Experience, awareness, and consciousness originally had very different connotations, and often very different meanings. Just because in our generalized polyglot please-everyone speech, we have merged their meanings into utter blandness through repeated misuse does not make it impossible to pry the terms apart and make note of their original senses or of their possible new specifications, which would allow us to carry this often-pointless discussion a little further along.

GN 


2016-10-18
RoboMary in free fall
Reply to Gregory Nixon

Hi Greg

RE: “Language and words develop generalized senses and lose the original specific meaning they once indicated. Experience, awareness, and consciousness originally had very different connotations, and often very different meanings. Just because in our generalized polyglot please-everyone speech, we have merged their meanings into utter blandness through repeated misuse does not make it impossible to pry the terms apart and make note of their original senses or of their possible new specifications, which would allow us to carry this often-pointless discussion a little further along.”

I don’t think it’s usually a question of misuse or original meanings. Most languages, including English, have words with overlapping and sometimes almost identical meanings (hence the category: synonyms). The words awareness and consciousness (and their cognates) are near synonyms. Thus, to say, for example (as Glenn did at one point), “Every experience you are aware of is a conscious experience" is to define “conscious” in terms of a word that means pretty much the same…

As for "experience”, it’s a real trap word and many, many philosophers fall straight into it. Chalmers, for example, defines consciousness by calling in aid the idea of “experience”. But an obvious and enormous question is immediately begged. When we use the term (human) experience, are we not already implying some form of consciousness? Answer: Who knows? Unless we clarify what we mean, we may well be. So the term “experience”, as it stands, without further definition (which would presumably need to avoid invoking the idea of consciousness) is a very weak reed indeed to lean on.

I sometimes have to pinch myself to remember that all this needs to be told to “analytic” philosophers (I’m talking generally here – it’s not aimed specifically at anyone on this thread). The analysis required to see these problems is basic, elementary.

And analysis of this kind is by no means “pointless” as you suggest. One does not need to be an analytic philosopher (and I am not) to recognise that unless one is keenly aware of the meanings of the words one uses, and the context in which one is using them, one immediately ceases being a philosopher and becomes a mere wordsmith – like Dennett, for example.

DA


2016-10-18
RoboMary in free fall
Reply to Glenn Spigel

Glenn,

 

Thanks! Now I see why you’ve put the argument the way you have. It is indeed a very nice argument. Still, I think that you miss the functionalist’s position, and hence miss Dennett’s possible reply.

The more I think about it, the more it seems to me that the functionalist would say that the experience of the third robot changes from red to blue. True, the robot will not be able to report any change in experience, but this does not mean that there is no such change. The functionalist would say:

a.     If we examine the robot’s reactions to different objects when the receiver gets stimuli from robot 1, we’ll see that it can distinguish red objects from grey ones.

b.     If we examine the robot’s reactions to different objects when the receiver gets stimuli from robot 2, we’ll see that it can distinguish blue objects from grey ones.

Therefore:

c.     The robot’s experiences are changing.

In sum, the functionalist’s reply would be that the experience is changing (based on the subject’s actions), even if the subject does not notice the change.

  -----------

On a side issue, you ask me about the following statement:

You can tell that reality is not a zombie universe based on your knowledge that you consciously experience, and you can conclude from that that what you consciously experience is having an influence on your behaviour (you are using it as evidence in your claim that reality is not a zombie universe).

I don’t know whether it is correct. Notice that even if we think of experiences as Nagel or Jackson do, we still don’t need to accept the claim that what one consciously experiences has an influence on one’s behaviour. We can hold epiphenomenalism regarding experiences, and hence reject the claim of influence on behaviour.

Personally, I would like to stick to the causal closure principle, if possible.  

-------------------
Regarding the “changing the subject” question-

I think that the behaviourist/functionalist would not try to correct you and say that “you did not mean the feature you did by 'consciously experiencing'”. The behaviourist does not care about what you mean. Her claim is that if you use English expressions according to their ordinary meaning, then ‘feeling pain’ means behaving in such-and-such way in such-and-such circumstances. ‘Being conscious of X’ means behaving in such-and-such way in such-and-such circumstances, and so on. (I’m talking here about the behaviourist, but similar claims hold for any other form of reductionism.)

So the behaviourist’s claim would be that you and I change the subject, when we say that English expressions about consciousness refer to something different from mere behaviour (think of Ryle as a proponent of such a view). They would not claim to be ignorant.

--------------------- 

Regarding your discussion with Derek-

My humble opinion is that Derek is right to require clarifications. However, it is unfair to blame you for using tautologies as definitions ("Every experience you are aware of is a conscious experience"). In a sense, every proper definition is a tautology (a triangle is a plane figure with three straight sides, is one example). I guess you and Derek should start from some agreed fundamental notions, which will be used for clarifying your view.

I hope this helps in a way.

Best,

Amit


2016-10-18
RoboMary in free fall
Reply to Glenn Spigel
My dear friends, to argue over minutiae in the sensible world, no matter how diverse the population it inhabits or how variegated its kinship, misses every conceivable point about reality, consciousness, and awareness, and represents an entire waste of your valuable time and obvious mental capabilities.
Reality cannot be described from the bottom up, but must begin from the top down. Start with the macro-level of first causes and the composition of the universe in its entirety, then work your way back down to what we humans can perceive, feel, think, and so forth.
Quit pretending that you know nothing and are discovering the world anew.
Just a thought from a metaphysician and Spinoza author.
Charles M. Saunders

2016-10-18
RoboMary in free fall
Reply to Glenn Spigel
I would like to address the original thought experiment of the three robots. As stated, for any given robot there is an arbitrary mapping between channels a, b, and c (lower case to avoid confusion between channel b and color Blue) and colors R, G, and B. For robot 1 the mapping is a->R, b->G, c->B. For robot 2 the mapping is a->B, b->R, c->G. However, we are given no information as to robot 3's mapping, or even whether there is such a mapping. So in effect, you're connecting arbitrary outputs from two different systems to an arbitrary input of a third system. I don't see how you can draw any conclusions from that set-up.

On the other hand, if you put RoboMary in that third room, she would know the mapping of both robots 1 and 2 as well as her own, let's say a->G, b->R, c->B. Let's say you query her before turning on the wireless feed. Even though up to now her channels have always been equal (x,x,x), she would know that if she was looking at a red box her channels would be 0,255,0. And so if you ask her what will happen when you hook her up as robot 3, she would say "well, it won't be like what robot 1 sees, because in that case my channels would be 0,255,0. And it won't be like what robot 2 sees, because in that case my channels would be 0,0,255. Instead, the weird way you have me set up will be like seeing green all the time. Except, because you have decoupled my sensory apparatus from my experience apparatus, I would essentially be a very simple brain in a vat."
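To make the arithmetic concrete, here is a quick Python sketch of the mappings as I have assumed them (the helper function and the dictionary layout are just my own illustration, not part of the original set-up):

    def channels(stimulus, mapping):
        # Pack colour intensities into channels (a, b, c) according to the
        # camera's mapping, e.g. ('G', 'R', 'B') means a->G, b->R, c->B.
        return tuple(stimulus[colour] for colour in mapping)

    red_box = {'R': 255, 'G': 0, 'B': 0}
    blue_box = {'R': 0, 'G': 0, 'B': 255}

    robomary = ('G', 'R', 'B')           # the mapping I assumed for RoboMary
    print(channels(red_box, robomary))   # (0, 255, 0) - unlike robot 1's (255, 0, 0)
    print(channels(blue_box, robomary))  # (0, 0, 255) - unlike robot 2's (255, 0, 0)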

So given the above, could you explain the original point?

*


2016-10-18
RoboMary in free fall
Reply to Amit Saad
Hi Amit
RE: In a sense, every proper definition is a tautology (a triangle is a plane figure with three straight sides, is one example)

In what sense is your definition a tautology? If I am explaining geometry to students who (let’s say) don’t know what “triangle” means, and I give them your definition, am I simply repeating the same information twice and not telling them anything they don’t already know? They have learnt a new thing, have they not – what the word triangle means?

By contrast, if I say to someone “Every experience you are aware of is a conscious experience”, and I accept that “aware” and “conscious” mean much the same, I am in effect saying “Every experience you are conscious of is a conscious experience". That is a tautology. The listener is told nothing new.  Imagine a scientist addressing a group of his colleagues and saying: “Great news! I have discovered a new element! It’s called novium. I define it as novium”. His audience is likely to lose interest rather quickly, don’t you think?

DA

2016-10-18
RoboMary in free fall

RE: “Reality cannot be described from the bottom up, but must begin from the top down”

Curious. I've never thought of reality having a “top” and a “bottom”. Did I miss the labels - "This side up”?

DA


2016-10-19
RoboMary in free fall
Reply to Amit Saad
Hi Amit, 

Regarding your answer and the changing experiences: are you suggesting that the functionalist would be suggesting that the qualia would change between red and blue but the robot would not notice the change from red to blue, or are you using a different meaning for the word experience?

Regarding your belief that epiphenomenalism is a plausible position: would that not imply that nothing you consciously experience could act as evidence? If so, then are you suggesting that you cannot tell that reality is not a zombie universe?

2016-10-19
RoboMary in free fall
Hi James, 

Sorry if it was not clear, but the third robot's eyes receive the A, B and C signals from whichever of the two sets of cameras is currently switched in. So the signal it receives from the room with the RGB cameras looking at the red cube is channel A = 255, channel B = 0, channel C = 0, and the signal it receives from the room with the BRG cameras looking at the blue cube is also channel A = 255, channel B = 0, channel C = 0. Which is why it states:

The processing would be the same in each case, as in each case the channel values for the box pixels (assuming no shading) would be channel A = 255, channel B = 0, channel C = 0.

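To put the same point in concrete terms, here is a minimal Python sketch (assuming 8-bit channels and unshaded, pure-colour cubes; the function name and layout are merely illustrative). It shows that the two rooms deliver bit-for-bit identical channel values to the third robot:

    def channels(stimulus, camera_order):
        # Pack colour intensities into channels (A, B, C) in the camera's order,
        # e.g. ('B', 'R', 'G') means channel A carries blue, B red, C green.
        return tuple(stimulus[colour] for colour in camera_order)

    red_cube = {'R': 255, 'G': 0, 'B': 0}    # room 1, viewed by RGB cameras
    blue_cube = {'R': 0, 'G': 0, 'B': 255}   # room 2, viewed by BRG cameras

    signal_room_1 = channels(red_cube, ('R', 'G', 'B'))   # (255, 0, 0)
    signal_room_2 = channels(blue_cube, ('B', 'R', 'G'))  # (255, 0, 0)

    assert signal_room_1 == signal_room_2  # identical input, identical processing

Whichever room is switched in, the third robot's processing runs on the same (255, 0, 0) values, so nothing in the signal gives a basis for saying the experience is more like red or more like blue.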

Hopefully that is now clear.

2016-10-19
RoboMary in free fall
Reply to Glenn Spigel

Dear Glenn,

I respect your attempt to disentangle this, but you need to read what I actually say. I AM suggesting that there is something that it is like for an influence given in a physics equation as a value of a variable to affect some human subject ‘me’. There is no ontological category of ‘what it is like features’. Something being like something to me is an epistemic or relational aspect based on total proximity to ‘me’. I realize that there is a potential confusion in that we use the phrase in ordinary language to say 'that looks like a banana to me', referring to a distal ‘object’. I am assuming that we accept, as Newton says, that the yellow that it is like is a phantasm of colour produced in the mind, and that we are interested in the proximal relation within the brain that involves something being like yellow.

 

I think you may have got confused by my denying that the spin of the quark relates to what it is like to be a quark. What I meant here was a denial of Russellian monism. Russell made the peculiar suggestion that phenomenal features are truly intrinsic to the elements whose relations are described by physics. Yet everything we know about phenomenality, and the way we use phrases like what it is like, indicates that it is a relational feature. And specifically the spin of the quark will relate to what a quark may be like to something else, the only case ‘I’ can know of being what the spin of the quark is like to me. Russell had already established that knowledge has to be based on causal relation, so that all physics can deal with is causal relation, not quiddity. So he should have known that something being like something must involve a causal relation – he shoots his own argument in the foot.

 

You raise the issue of a possible universe with a different ‘underlying physical’ (presumably implying physical quiddity or intrinsic nature). Chalmers raised this with schmass rather than mass. But as Russell rightly pointed out, this concept of physical quiddity is meaningless. Physical either means described by physics, which specifically excludes quiddities, or something that is (indirectly) like something to us through our senses (the dictionary definition). A lot of people think there is some other sense of ‘physical’ that means stuffness or quiddity but if you try and think what it might be, as Kant points out, all you ever do is draw metaphors from what the world is like to us. The concept of a ding-an-sich is empty. So there are no arguments about ‘what it is like features’ not being ‘physical features’. That is the sort of confusion that present day philosophers have got themselves into because they have not read the early moderns carefully. Chalmers’s Hard Problem – and here I agree with Dennett, but for other reasons – is an admission of not understanding what physics is about, a symptom of the disconnection between science and an alternative poetic discipline now called philosophy.

 

To suggest that the influences that are like something to us cannot be those found in physics texts seems totally implausible. Physics sets out to catalogue the regularities in all causal relations we can know. The only causal relations we can categorise are those that, indirectly, are like something to us. So causal relation A to B can be catalogued because of a chain A to B to C to D to ... W to X where X is a human subject (some dynamic element of a brain) and W is like something to X. We have no means of interrogating W to X because we have no sensors that can track over our own brain events. But we have sensors that can track over A to B to C by making inferences from the way W to X relations are like something to ‘me’ – making use of the assumption that U to V to W to X relations in the brain follow stereotyped rules that we can bracket out.

 

So we arrive at a language of how causal relations link up distally that necessarily makes no reference to what W is like to X, but there is no suggestion that these need to be different sorts of relations. And since we can only know them because they are parts of chains A to B to C ... to W to X, it seems reasonable to assume, as all the early moderns did, that they are at least part of the same system of relations. Descartes was worried that the proximal relation W to X could not be mechanical in the way that he thought A to B was, but within twenty years of his death the work of people like Hooke and Wren had shown the way to understanding that nothing is mechanical in Descartes's sense and that, as Leibniz formulates most clearly, all immediate relations are non-mechanical – as modern physics confirms.

 

There is simply no puzzle here. Just a lot of closeted academic philosophers chasing their tails.


2016-10-20
RoboMary in free fall
Hi Jonathan, 

Thank you for trying to help me understand your position, and I admit I am struggling slightly.

You write: 

But as Russell rightly pointed out this concept of physical quiddity is meaningless. Physical either means described by physics, which specifically excludes quiddities, or something that is (indirectly) like something to us through our senses (the dictionary definition). A lot of people think there is some other sense of ‘physical’ that means stuffness or quiddity but if you try and think what it might be, as Kant points out, all you ever do is draw metaphors from what the world is like to us.

I understood that Berkeley also pointed out that when people try to describe any features of the material world, their imaginings draw upon metaphors from what the world is like to us. I understood that Berkeley was suggesting that the conceptions the materialists had, regarding features of any entity in their imagined material world, were empty of any meaning outside of what it was like to be us. I however think it goes too far to suggest that what they were suggesting was meaningless, since they could outline the relationship between the entities that they thought to exist without the need to have a clear idea of the features of those entities. As I understand it (I am quoting from someone quoting...)  Russell argued that:

the physical world is only known as regards certain abstract features of its space-time structure—features which, because of their abstractness, do not suffice to show whether the physical world is, or is not, different in intrinsic character from the world of mind.

And also wrote that:

Physics is mathematical not because we know so much about the physical world, but because we know so little: it is only its mathematical properties that we can discover. For the rest, our knowledge is negative. . . . We know nothing about the intrinsic quality of physical events except when these are mental events that we directly experience . . . as regards the world in general, both physical and mental, everything that we know of its intrinsic character is derived from the mental side.

And from that it seems to me that Russell was not suggesting that the physical world lacked intrinsic nature or quiddity, as you are referring to it, only that we were ignorant of the intrinsic nature that the mathematics was modelling. What I quoted you as writing came directly after you wrote:

You raise the issue of a possible universe with a different 'underlying physical' (presumably implying physical quiddity or intrinsic nature). Chalmers raised this with schmass rather than mass.

It seems to me that you are going further than Russell, and are denying that physical entities can be imagined to have any intrinsic character of which we are ignorant, but which can be represented in physics equations as variables, and whose relational effects can be described by the relationships between those variables. Yet at the same time you seem to think of the spin of a quark as a causal feature of the quark, albeit one presumably known only indirectly through the effects it has on the quark's relationships. I find this strange because I was assuming that a description of the fundamental features an entity has would be a description of its intrinsic qualities; the better those features were known, the better the description would be. So you seem to treat the spin of a quark as a causal feature of the quark, while accepting that your knowledge of that feature is limited to its effects on the quark's relationships, and yet at the same time to deny that the quark has any intrinsic nature, which seems to deny that its having that causal feature is part of its intrinsic nature.

If you had simply meant that where the features are understood to be the same there is no sense in which the intrinsic nature could be thought different, then that would not apply to the alternative universe scenario I gave, because the features were not the same. And even if all the features you were aware of were the same, that does not rule out there being features you were unaware of which could differ. For example, consider a physicalist theory in which two universes were created at the Big Bang: one with a majority of matter and a minority of anti-matter, the other with a majority of anti-matter and a minority of matter. The physics in both would be the same, as I understand it, in the sense that physicists in each could consider the universe they existed in to be the one with the majority of matter and the minority of anti-matter. They would not differ in their awareness of the features of what they referred to as "matter", but would you be suggesting that this would therefore mean there was no physical difference between what was referred to as "matter" in the two universes? If so, that would seem to entail that you think there is no physical difference between matter and anti-matter in this universe.

You mention that you consider physics to deal with causal relations, so are you not considering the spin of a quark to be a feature of the quark which at least partially determines how it relates to other entities? I ask because I was assuming you were not thinking of the spin as a feature of the relationship; the more common position, I would have thought, is that it is a feature of the entity, used to explain regularities in the entity's relationships. But if you were considering it a feature of the entity that, along with other causal features, determines the relationship, and were interpreting the other variables in the physics equations in a similar way, then are the variables not representing the causal features of the entities in your ontology? If so, could they not be considered to make up the membership of the set of causal factors in your account? On the other hand, you do not seem to consider what-it-is-like to be a feature of the entities within your ontology, but rather a feature of the relationships between them, and so you do not seem to imagine it being represented by the variables in the physics equations, but instead being represented somehow in the resultant relationship determined by the causal features the variables represent.

Also when you wrote:

Something being like something to me is an epistemic or relational aspect based on total proximity to 'me'.

I am not sure what you were referring to as 'you'. Is an atom in your leg a part of 'you', or is it something to which 'you' have an epistemic or relational aspect based on its total proximity to 'you'?

Thank you for your patience in explaining your position to me. 

2016-10-20
RoboMary in free fall
Reply to Glenn Spigel

Dear Glenn,

That is a good set of queries. We are getting to the nitty gritty.

 

Yes, I do go beyond Russell because he wants phenomenality to be a non-relational quiddity. I follow Leibniz who I think gets all this right. Russell was raised on Kant, who muddled up Leibniz. Russell gets interested in Leibniz but never quite sees how his system works.

 

There can be confusion because properties of tokens and types need to be distinguished. A type of dynamic unit like a quark has a relational dispositional nature that includes spin. A token quark has a history of relating to the world via that disposition. Token relating to world is seen as an extrinsic feature (like being an uncle) by philosophers. But the dispositional powers of a type of unit might be called intrinsic even though they are relational – i.e. they have no meaning other than in the context of a relation. Leibniz regards dynamic powers like spin as intrinsic. What may be confusing is that Leibniz claims that a power is never unfulfilled – there is always an actual relation if there is an actual unit, and it is completely fulfilled from the outset. This needs a lot of unpicking but looks as if it ends up being right (maybe forget that for now!).

 

The difficulty is that a lot of philosophers want powers to be ‘underpinned’ by a quiddity that is not in itself the power (and therefore not relational) but what gives rise to it – like schmass. However, in physics, as Russell points out, the concept of mass is purely a concept of power, not of quiddity underlying a power. The joy of Leibniz is seeing that the power and its fulfillment are the realities. Nothing beyond that makes sense. Kant seems to have backtracked to say that there is some underlying hidden ding-an-sich, but people vary on whether they read him as saying there really is such a thing or not. Russell makes the odd move of making phenomenality or ‘what it is like’ the quiddity. But here we are already thinking of dispositional power rather than actual relation to world, and dispositional power would seem to be constant, so a quiddity underlying it would be expected to be unchanging – nothing like experiencing a rich world. If anything, experience ought to be the fulfillment of the relation – at the other end of the spectrum of properties. That is what Leibniz says it is, and I go with that.

 

So I agree with Berkeley that any materialist’s or Kantian’s attempt to conceive of something knowable or unknowable that material stuff might be like in itself will always just draw on experience and is empty. The beauty of Leibniz’s position is that the nature of the thing in itself is totally knowable – it is the dynamic disposition that we can describe in great detail with the maths of physics. That is the reality. That is what our senses represent for us. It is just that our language has got stuck with ‘things’. Most philosophers today seem to be stuck, like the man in the street, as ‘thing-ists’. I recommend jumping out of the plane with Leibniz and becoming a pure dynamist, because the parachute works beautifully and the result is much more exciting. And the knowable dynamic dispositional nature of elements of the world cannot be Kant’s ding-an-sich, because it is entirely knowable – it is what knowing is all about.

 

With respect to the difference between matter and anti-matter, I am pretty sure that physics would say they are in no way physically different – unless someone has found some asymmetry, which would of course render the metaphor unhelpful. We see this in electron pairs. The two electrons allowed to fill a particular orbital in an atom are opposite in spin, but because there is no fixed reference for spin you cannot say one has positive spin and the other negative. There is no physical property you can say one has and not the other. The same, I think, would be true of positive and negative charge in your example. You cannot take a handful of positrons from one universe and check them out in another, so there is probably no way of saying that your two universes are different. This is one of the aspects of present day physics that bears Leibniz out so well. Reality is just doing, not being. Whenever two modes of doing are indistinguishable they have to be regarded as the same. This is the principle of identity of indiscernibles (PII) that turns out to work so elegantly in quantum theory. There is no such question as whether it is electron A or electron B that occupies an orbital. The occupied orbital is the dynamic entity – the doing, free of individual quiddity beyond that.

 

When I mentioned a ‘me’ earlier I noted that this also needs a huge amount of unpacking. By a me I mean a human experiencing subject. Since Ryle at least it has been popular to insist that a human subject is a ‘whole person’. But there is no concept in science of a whole person and I doubt there is a coherent concept outside science. As Descartes pointed out, a human subject more or less has to be a dynamic unit operating in some small part of a brain that receives the inferences about the world that are derived by other parts of the brain. It gets words rather than just stimulations of individual cochlear hair cells, colours indicating reflectance dispositions rather than just pixels on a retina. It pretty much has to be anterior to the primary sensory cortices.

 

But Descartes's big mistake, I think, was to assume that there is only one experiencing ‘me’ subject in each brain. Neurology makes it much more likely that there are millions of them, and my particular interest is in the idea that each one occupies a dendritic tree in an individual neuron. This may sound weird but it is cold-blooded neuroscience.

 

So an atom in my leg would not be part of a human experiencing subject. (Atoms are actually quite rare in our environment. Mostly the levels go from subatomic particle to molecule. The only dynamically independent atoms around are probably the odd argon or neon floating through.) And the human subjects whose experiences we talk about are not in legs. There may be experiencing subjects of all sorts in legs but we have no idea what it might be like to be them, in the sense of being the recipients of influences from other dynamic elements.


2016-10-20
RoboMary in free fall
Reply to Derek Allan

Hi Derek,

I’m sorry, I wasn’t accurate enough. I should’ve used quotation marks. We’re talking about two different claims:

A.      A triangle is a plane figure with three straight sides.

B.      ‘Triangle’ means a plane figure with three straight sides.

A is a plain tautology. B, on the other hand, is not. Actually B is not even a necessary statement. Of course, B would add new information for someone who doesn’t know what “triangle” means in English. The same holds for Glenn’s definition of ‘conscious experience’ (‘Conscious experience’ means every experience that you are aware of). If you don’t know what Glenn means by “conscious experience”, this should give you some new information. If you do know what he means, then there’s no need to clarify the issue.

In any case, all of this is just a side issue. The question Glenn wishes to discuss is whether there’s a distinction between knowing what it is like for someone to have a certain experience (say to see red), and knowing how one would react under certain conditions. He presents, I believe, an interesting argument which deserves a serious discussion.

Yours,

Amit


2016-10-20
RoboMary in free fall
Reply to Glenn Spigel

Glenn,

You ask:

Regarding your answer and the changing experiences, are you suggesting that the functionalist would be suggesting that the qualia would change between red and blue but the robot would not notice a change from red to blue, or are you using a different meaning for the word experience?

Notice that you talk about experiences and qualia interchangeably. However, many functionalists would reject this sort of talk (and might even reject the use of the term ‘qualia’).  The view I have considered is the crude version of functionalism. According to this view there is a conceptual relation between experiencing a blue object and the blue object itself. In your thought experiment the third robot observes two different objects (the red and the blue cube) based on the condition of the switch. Hence, the crude functionalist would say that the switch changes the robot's experiences. Now, you might say in reply that it is part of the meaning of seeing red and blue that a subject must be able to identify such a change in the perception. If this is correct, then you can take it as an argument against crude functionalism/behaviourism.

The functionalist, I believe, may have two possible replies (I think that Dennett suggests the two of them in his writing). First, the functionalist may reject your claim that subjects must identify such changes in experiences (see Dennett’s Quining Qualia). Second, the functionalist may present a more sophisticated version of functionalism. Dennett distinguishes between the personal level and the subpersonal level. In your thought experiment, on the personal level the robot observes two different objects (red and blue cubes). On the subpersonal level (i.e. the robot’s brain) the robot’s functions are identical regardless of the switch’s state. A more sophisticated functionalist may argue that the experience of “seeing red” is explained by referring to the functions of the subpersonal level. Hence, the sophisticated functionalist may argue that the experiences of the robot are not changed by the switch. (Please check Dennett’s paper Toward a Cognitive Theory of Consciousness regarding this point.)

Best

Amit


2016-10-20
RoboMary in free fall
Reply to Glenn Spigel
Hi Glenn,

In fact I did understand your scenario, but apparently you did not understand my response, so let me try again.

In the scenario as described, the source of the signal to channel A of robot 3 is arbitrary, and thus irrelevant. It makes no difference whether it comes from robot 1, robot 2, or simply a source that constantly outputs the number 255. How it is experienced by robot 3 depends only on how robot 3's channels are mapped to colors, and the scenario doesn't provide that information.  So of course no one can say if robot 3's experience is like experiencing red or blue or green.

But RoboMary is explicitly described as having all the information, including her own mapping, and so she will know exactly which experience is like or unlike her own. So given these mappings:

Robot 1: RGB
Robot 2: BRG

RoboMary's response will be one of three possibilities, depending on her own mapping:

1. RGB or RBG: "My experience is like Robot 1's, but unlike Robot 2's"
2. BRG or BGR: "My experience is like Robot 2's, but unlike Robot 1's"
3. GRB or GBR: "My experience is unlike Robot 1 or 2. My experience is like looking at a green box."
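
To make those three cases concrete, here is a minimal sketch (my own illustration with invented names, not part of the scenario; I assume the stimulus is the constant 255 on channel A, as above):

CHANNEL_A_COLOUR = {"R": "red", "G": "green", "B": "blue"}

def channel_a_colour(mapping):
    # The mapping string lists the colours carried by channels A, B, C in
    # order, so "RGB" means channel A carries red, "BRG" means it carries blue.
    return CHANNEL_A_COLOUR[mapping[0]]

def robomary_verdict(my_mapping, robot1="RGB", robot2="BRG"):
    # RoboMary knows all three mappings, so she can say which robot's
    # experience of the 255-on-channel-A stimulus is like her own.
    mine = channel_a_colour(my_mapping)
    if mine == channel_a_colour(robot1):
        return "like Robot 1, unlike Robot 2"
    if mine == channel_a_colour(robot2):
        return "like Robot 2, unlike Robot 1"
    return "unlike both; like looking at a green box"

for mapping in ["RGB", "RBG", "BRG", "BGR", "GRB", "GBR"]:
    print(mapping, "->", robomary_verdict(mapping))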

Does that help?

*

2016-10-20
RoboMary in free fall
Reply to Amit Saad

Hi Amit

Re: (‘Conscious experience’ means every experience that you are aware of). If you don’t know what Glenn means by “conscious experience” this should give you some new information.

What would that new information be?

Re: “In any case, all of this is just a side issue. The question Glenn wishes to discuss is whether there’s a distinction between knowing what it is like for someone to have a certain experience (say to see red), and knowing how one would react under certain conditions. He presents, I believe, an interesting argument which deserves a serious discussion.”

If there's a tautology in an argument, it’s hardly a side issue.

As for the other argument, as soon as I see signs of the Nagel nonsense (“something it is like”), I lose interest. Any argument relying on that is doomed to go nowhere.

DA


2016-10-21
RoboMary in free fall
RE: "And the human subjects whose experiences we talk about are not in legs."

That might explain why people who lose their legs can still think, talk etc. 

Wonderful thing philosophy!

DA

2016-10-21
RoboMary in free fall
Hi Jonathan, 

Thanks again for your patience, but I am still not sure that I am understanding you. You mention types and tokens, and I take you to mean by a token an instance of a certain type. You state that "A token quark has a history of relating to the world via that disposition. Token relating to world is seen as an extrinsic feature (like being an uncle) by philosophers." I am not sure what you mean by that. Are you stating that for any given quark the spin is not seen as an intrinsic feature of the quark which could be used to explain the regularities in its relations? This may seem like a strange question given that, when discussing quarks as a type, you do seem to be of the mind that the spin is an intrinsic feature of the quark type. If you were meaning that the spin was an extrinsic feature of a quark instance, then what instance would the spin be an intrinsic feature of (this assumes an ontology of instances which have features; please correct that assumption if it is wrong)?

Regarding your response to the matter and anti-matter universe, I am still not clear on that either. You presumably are of the mind that in this universe matter and anti-matter are different. What I am having a problem in understanding is why you are suggesting that they would not be different if they were in different universes. You recognise that the physicists would be unable to tell which of the universes they were in, and therefore seem to conclude that because they were unable to tell the difference, there must be no difference. But that seems to me to be erroneous thinking, since in the thought experiment I have told you that what the physicists in one are calling matter is what the physicists in the other are calling anti-matter, and as long as you are of the mind that in this universe matter is different from anti-matter, I fail to see how you do not recognise that a difference is implied. It seems to me similar to a situation where there are two rooms: in one there is a box containing a small cube, which is referred to as an "A box", and in the other a box containing a small sphere, which is referred to as a "B box". A person in one of the two rooms, only able to look at the box from the outside and asked whether it is an A box or a B box, declares, because there is no visually distinguishing feature, that there can be no difference between an A box and a B box because they are unable to tell. I do not wish to change the thought experiment really, and was only mentioning the second one to highlight the type of erroneous thinking that seems to me to be being used with regard to the first one (the majority matter and majority anti-matter universes). That a person could not tell is the point, but with the thought experiment you are told what the difference is, and as of yet, I have not understood you to be declaring that there is no difference between matter and anti-matter in this universe. So it seems to me that you are recognising a difference but somehow thinking that the difference depends upon your recognition of it.

Regarding 'you', you seem to be suggesting that you are consciously experiencing being a dendritic tree in a neuron, as opposed to you consciously experiencing being composed of multiple dendritic trees across multiple neurons, have I understood you correctly? 

Yours sincerely, 

Glenn

2016-10-21
RoboMary in free fall
Reply to Amit Saad
Hi Amit, 
What I was asking was whether you are suggesting that what-it-is-like for the robot would change, but that it would not notice. The reason was that the plausibility of that suggestion would seem to track the plausibility of the suggestion that what-it-is-like to be you could repeatedly and dramatically change without you noticing.

I can understand that when you earlier wrote, regarding room 3, that "the robot’s experiences are changing", it did not necessarily imply that you were discussing what-it-is-like for the robot, as functionalists can use the word "experience" differently; they could even deny understanding what was meant by what-it-is-like to be the robot. But the issue in the original post was about what-it-is-like for the robot in the third room, and I was not clear whether you were intending the statement that "the robot’s experiences are changing" to address that issue.

Yours sincerely,

Glenn 


2016-10-21
RoboMary in free fall
Reply to Derek Allan

Derek,

Since you lost interest in Glenn’s argument on the second paragraph of this thread, let me please just mention what I find problematic in your reply to Glenn.

A.      Glenn makes a claim about conscious experiences and tries to defend it with an argument. You reply in a Socratic manner and say something like “boy, before we can evaluate your argument, please say what a conscious experience is”. This invites Glenn to answer with a definition. He takes the Socratic challenge and provides you with one. But now you take Meno’s part and say something like “ah! by making such a definition you’re just saying the same thing using different words. That’s a tautology! So, you don’t get very far, do you?!” It seems that you play the Socratic part only to drag Glenn into Meno’s paradox. I guess that you don’t do it just to Glenn, but to anyone else who makes claims about consciousness. Therefore, it is no wonder that you think that most people who study consciousness are actually talking nonsense (Nagel, Dennett, Putnam, Chalmers? Jackson? who’s next?). Playing the Socratic part, by asking what X is, and then dismissing the definition by saying it is a tautology, is not going to get you very far. The definition will be a tautology. We’ve got that from Plato’s Meno. So, what sort of explanation are you looking for? Which sort of reply would satisfy your question “what is a conscious experience?” Do you want examples? An ostensive definition?

B.      From your replies, it seems to me that you confuse tautologies with trivialities. You keep asking “what new information does this statement yield”, as if this is the criterion for judging that a statement is a tautology. Well, I’m not sure what you mean by “information”, but sometimes tautologies are anything but trivial and provide us with a better understanding of a certain phenomenon.  Let me state four types of such tautologies:

a.       mathematical statements (e.g. There is no largest prime number)

b.      valid logical statements

c.       a posteriori necessary statements (e.g. Hesperus is Phosphorus, Water is H2O)

d.      philosophical statements

Now, I'm not sure about the fourth type. Some people would follow Wittgenstein in saying that in philosophy all we do is clarify concepts, and so we simply make one tautology after the other. I think that Wittgenstein thought that in this sense in philosophy all we do is state tautologies. Again, I’m not sure whether he was right. Still, waving away a philosopher who clarifies a certain concept by saying “yeah, but that’s just a tautology” may be similar to charging mathematicians with stating tautologies. My point is that as long as tautologies are not trivial, it might be useful to make them.

Yours,

Amit     


2016-10-21
RoboMary in free fall
Reply to Glenn Spigel

Hi Glenn,

Yes, I was talking about what-it-is-like for the robot. If you insist, we can talk about qualia. Again, Dennett has two possibilities: 

First, he might be a crude (personal-level) functionalist, and therefore he would say that the robot’s experiences would change (and so the qualia/what-it-is-like-for-it would change) but that the robot wouldn’t notice it. In reply to your claim about the implausibility of such a change in qualia going unnoticed, Dennett would bite the bullet and say that you base this ‘plausibility’ on your intuitions. However, these intuitions are unreliable (this attack on intuitions is one of Dennett’s favorite moves in these sorts of discussions).

The second option for him (which seems to follow his line of argument in the paper you address) is being a sophisticated (subpersonal-level) functionalist. Then, he would say that the qualitative state of the robot is determined by its brain state (in the paper he talks about being in state B). In this case, the brain state of the robot would be identical in both cases (whether it gets the stimulation from robot 1 or 2). Now, if in this case the robot would be in a brain state in which Mark 19 robots are in a qualitative state of red, then the robot in the third room would also be in such a qualitative state. If in such a case Mark 19 robots have the experience of seeing blue, then so does our third robot. In sum, the subpersonal-level functionalist would not say that the qualitative state of the robot is changing. She would rather ask you for clarifications regarding the brain state of the third robot, and based on your reply would judge whether it is experiencing red or blue.

Amit

2016-10-21
RoboMary in free fall
Reply to Amit Saad
Hi Amit

RE: “It seems that you play the Socratic part only to drag Glenn to Meno’s paradox. “

I doubt if I'm doing anything as grand as that. I'm simply trying to make sense of a proposition.

RE: “Therefore, it is no wonder that you think that most people who study consciousness are actually talking nonsense (Nagel, Dennett, Putnam, Chalmers? Jackson? who’s next?)

My comments related to specific propositions by said philosophers. I think they're nonsense and said why. Happy to do so again if necessary.

Re: “Playing the Socratic part, by asking what X is, and then dismissing the definition by saying it is tautology is not going to get you very far.”

Well, it will get me somewhere if the definition is a tautology. Again, let me ask you: if conscious means (approx) the same as awareness, and someone defines consciousness in terms of awareness, how is that not a tautology? More important (forgetting what term we use to describe it) how is that not a useless, uninformative statement?

RE: “My point is that as long as tautologies are not trivial, it might be useful to make them.”

Well, let’s be clear, tautology or not (and I think you're making a mountain out of that particular molehill) it was most certainly trivial. You ask me: “So, what sort of explanation are you looking for? Which sort of reply would satisfy your question ‘what is a conscious experience?’” Very simple: something that would help me understand what this extremely puzzling thing we call consciousness is. (And telling me it is about “awareness” or “experience” is utterly useless in that regard.) I am personally not sure that anyone will ever be able to throw much light on the subject. Certainly no philosopher I’ve read so far has managed it.* Is that defeatist or something? By no means. It’s just philosophy – philosophy that refuses to treat half-baked arguments as the real thing. And since you seem so keen on Socrates etc, wasn’t it Socrates who said that the most important thing in philosophy is to recognise what we don’t know?

DA

* I think Dennett’s contribution for example is a bad joke – Chalmers also. And Nagel. The endless “neuroscientific” stuff that some like to trot out is utterly pointless, in my view. The only person I’ve read who has some interesting (if marginal) things to say is Fodor.   



2016-10-22
RoboMary in free fall
Reply to Glenn Spigel
Dear Glenn,

You are enunciating the 'thing-ist' view rather nicely. Leibniz says that however intuitive it may be, it is logically incoherent. He produces the principle of identity of indiscernibles, which in a sense says what you conclude with: 'somehow thinking that the difference depends upon your recognition of it', except that it is a matter of what is in principle possible. So if in principle there is no possibility of discerning a difference between two entities, they are identical. This generates some very counterintuitive effects – as for there being no fact of the matter 'which electron' is occupying an orbital. It seems that there are no 'instances' that have dynamic properties, just instances of dynamic properties. Leibniz wins out because early quantum theorists discovered that the empirical evidence proves that he is right. If you cannot in principle distinguish two 'particles' then they behave as if the same particle – with a quite different maths from two particles. I am not up on anti-matter but I am pretty sure that there is no intrinsic difference from matter. Negative and positive charge are just conventions.

So we have to throw away completely our traditional idea of things with properties. And once you do that everything suddenly becomes much more consistent and elegant. Things and properties belong to a crude popular view of the world that works in everyday life but not in modern physics and not in any consistent metaphysics. What I learnt when I became a philosophy student after retirement from science was that most philosophers refuse to make the move from the intuitive position. There seems to be a belief that if words are used by ordinary people and Aristotle in a certain way that must be the correct way. What one learns in science is that one has constantly to re-adjust one's concepts and word meanings as one learns clearer models of the world.

The most egregious example of this is the insistence that it is 'a person' that sees or knows. Science going back to Hippocrates has made it pretty clear that it is very unlikely that there is any entity that can be ascribed both a relation of seeing and a relation of knowing. There is no such thing as a person that sees and knows, so the Mary story is built on sand. There is no such thing as 'a robot' that might see and know. These are pseudo concepts once one gets down to a fine enough grain to be relevant to the discussion in hand. Insisting on hanging on to traditional usages for words just leads to endless circular nonsense.

Yes, I strongly suspect that an 'I' is a consciously experiencing dynamic unit in a single dendritic tree - of one not many neurons. It is probably confusing to say that an I experiences being a dendritic tree because what the I experiences is everything else rather than its own nature. There would of course be thousands or millions of such 'I's in the brain of the human body called Jo Edwards or Glenn Spigel. That may sound strange but as I see it the joy of academic discussion is the fact that one can so often be proven totally wrong in one's ideas and thereby move on to much more interesting ideas. Present day philosophers seem more interested in putting up barricades to ensure they never change their views. It seems so boring.

2016-10-23
RoboMary in free fall
Reply to Amit Saad
Hi Amit, 

So regarding the first response, if I understand you correctly, it would be the claim that if a person, a scientist perhaps, had bionic eyes and the same thing was done as in the thought experiment, and the scientist reported that there was no change in colour experience in the third room, then the functionalist would just claim that the scientist was mistaken, and that really the colour was dramatically changing but the scientist failed to notice.

With regards to the second response, yes, they could claim it will be either red or blue, but on what basis will they decide which? James of Seattle had written a response earlier (and sorry for not replying separately if you are reading this James, but the site only allows me 2 responses a day, and I have gone with responding to the two conversations I was progressing with, in the hope that someone might in the meantime respond to you) which seemed to suggest that it would depend on some internal mapping to colour. But there is no internal mapping to colour, only internal processing based on A, B, and C channel variables; what basis is there to link any of them with a conscious experience of any of the colours in particular? You could suggest that if it could talk then what it called the A, B, C colour values would determine the answer, but what if it called 255.0.0 "red" when it spoke English, and "bleu" when it spoke French? Would a bunch of philosophers holding hands declaring one over the other hold any weight?
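
To illustrate that last point with a sketch of my own (both word tables are hypothetical, and nothing here is part of the Mark 19 specification):

# The word reported for the channel triple 255.0.0 is fixed by an
# arbitrary word table, not by anything in the channel values themselves.
colour_words = {
    "English": {(255, 0, 0): "red"},
    "French": {(255, 0, 0): "bleu"},
}

def report(channels, language):
    # Identical processing either way; only the lookup table differs.
    return colour_words[language][channels]

print(report((255, 0, 0), "English"))  # -> red
print(report((255, 0, 0), "French"))   # -> bleu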

Anyway, the functionalists could declare two or more different things regarding what it was consciously experiencing (the colour would be changing, it would be staying the same...), but that just goes to make the point. There is a difference between knowing how it would behave and knowing what-it-was-like. The answers are not compatible, and how could they tell which was correct, especially with the first type of functionalist arguing that any observational evidence on what would happen in a human does not count as evidence?

Yours sincerely, 

Glenn  



2016-10-23
RoboMary in free fall
Hi Jonathan, 

With the matter and anti-matter, are you suggesting that they are the same in this universe? Because with a scientific realist type ontology, I do not see how they can be. Matter (M) does not react with matter (M) the same way it does with anti-matter (A). From https://en.wikipedia.org/wiki/Antimatter:
"Collisions between particles and antiparticles lead to the annihilation of both, giving rise to variable proportions of intense photons (gamma rays), neutrinos, and less massive particle–antiparticle pairs." That is not what happens when two matter particles meet, or when two anti-matter particles meet.

So I was thinking of it as kind-of like:
 
M + M != M + A  ⇒ M != A

If you accept that in this universe there is a difference in reaction between when a matter particle meets a matter particle and when a matter particle meets an anti-matter particle, and that this shows that they are not the same (M != A), then the thought experiment is just creating a scenario in which an inhabitant of the universe could not tell whether they are calling M or A "matter", so that the type of reasoning you were outlining would kick in and result in the conclusion that "matter" M = "matter" A, in other words M = A. But just to reiterate, we can tell from scientific experiments that M != A, from M + M != M + A (in a sense regardless of whether M and A had an ontological existence or were just modelled in the mind of God, for example), and thus their conclusion would be wrong. So the thought experiment seems to me to show a flaw in the reasoning you were attributing to Leibniz, because it shows how such reasoning can lead to an incorrect conclusion.
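
To spell the step out in my own shorthand (this formalisation is mine, not anything you wrote): writing $R(x, y)$ for the outcome when particles $x$ and $y$ meet, the inference is just the contrapositive of the indiscernibility of identicals:

\[
R(M, M) \neq R(M, A) \;\Rightarrow\; M \neq A
\]

since if $M = A$ we could substitute $A$ for the second $M$ and obtain $R(M, M) = R(M, A)$, contradicting the observed difference in reactions.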

Regarding your single neuron idea, do all brain cells exist for the whole life of the human, or at least does the one that, on your theory, we are experiencing being? If not, then are you of the mind that people's life expectancy might not be as long as they assumed (as they are a single neuron, and it could quite easily not last as long as the human)? And if so, have you any idea of the likelihood of your neuron lasting until the human dies? Also, would all the consciously experiencing neurons be having the same experience as yours, or is it just coincidence that yours (you could have been one of the others) is having an experience appropriate to the human behaviour, so much so that it seems to be reporting what you are experiencing?
    
Also regarding the spin of a quark, I had mentioned that "if you were meaning that the spin was an extrinsic feature of a quark instance, then what instance would the spin be an intrinsic feature of (this assumes an ontology of instances which have features, please correct that assumption if it is wrong)?" I am not sure if you were planning on clearing the other points up first before coming back to it, but I thought I would just mention it.

I agree with your sentiment about being open to discussion and being willing to change your mind if, on consideration, you think that you would were you unbiased about what was being discussed. It was a concept I quite liked in Descartes's Meditations, where he made an attempt at re-examining what might have just been his biases. I assume most philosophers put up barriers because of career issues.

Yours sincerely, 

Glenn


2016-10-23
RoboMary in free fall
Reply to Glenn Spigel
Dear Glenn,

You are stuck in the thing-ist intuition, I fear. If there might be one useful outcome of this discussion it might be to persuade you that there is a viable alternative – the dynamist position. Dynamism is the only position compatible with modern physics so I would recommend it. Otherwise you do philosophy in a way that is inconsistent with the actual world – hence all the confusion over Mary and suchlike.

For X to be different from Y is for X to have a disposition or power that makes it possible for it to have a causal relation of difference to Y under favourable circumstances. A physical universe is a collection of entities that have the possibility of causal interaction in this way. Matter in another universe has no power to relate causally to anti-matter in our universe, so it cannot be different from it in the way that matter in our universe is. Whether it can even be notionally different is an even bigger unknown, one that casts serious doubt over all philosophers' arguments about possible worlds.

So yes, of course we are assuming that matter and anti-matter are different in this universe, but none of your further steps follows. You need to get off the rail tracks of thinking in terms of imagined 'things'. We can only usefully have a discourse about events, happenings or processes involving causal relation. Anti-matter in another universe cannot be said to be more like anti-matter than matter in our universe, unless, as said, there is some asymmetry in the powers known in our universe, which makes the analogy unhelpful.

Your questions about individual neurons are very familiar to me – the ones most people pose. But they are based on not actually reading what I wrote. I said nothing about a special neuron being me. I said that millions of neurons probably each experience being me. I have no reason to think there is one Jo Edwards subject or one Glenn Spigel subject. I used to assume that because I was brought up in a culture that assumed it. However, it was not assumed in intellectual circles in the mid nineteenth century, nor by Elizabeth Anscombe, and it has not been based on evidence or reason.

It does not matter how long neurons last. Most brain neurons present at age five probably last a lifetime but it does not matter. All that matters is that at any one time there are a few neurons around to be subjects. As John Locke pointed out there is no need for any enduring self. We have no reason to believe that a human subject that experiences being Glenn Spigel today is the same as one that had a similar experience five years ago. Many of the molecules involved will have changed, as will the structure of the organelles. We cannot pin our identity on continuity of matter. There may be dynamic entities in cells (like Bose modes) that last decades but we have no need to require that.

I think philosophers put up barriers to new ideas not for career reasons but because they are frightened of changing their thinking framework or simply cannot see how to. A lot of people find it very hard to break out of intuitive realism. But any serious analysis of what the world is about requires that - it is what philosophy has been about since Parmenides pretty much. So I tend to think we have an irony that people who go into philosophy are often those who cannot cope with real philosophy. They stick to playing with words, which is no good without acquiring new concepts.



2016-10-24
RoboMary in free fall
Reply to Derek Allan

Thanks Derek,


I think that I understand your position more properly now (and I have lots of respect for this view). Consciousness is indeed a tough one. I’m not sure that I can give you a non-trivial/non-circular account of this phenomenon.

But, please let me ask you this. Think of a philosopher who says something like:  “Well, there’s this expression ‘consciousness’ that people use, and seem to describe a certain phenomenon. They also use many other expressions like ‘awareness’ and such to describe this phenomenon. I can’t say much about this phenomenon, though I can give you some examples of ‘conscious experiences’ (I can also provide some examples of things which are not called ‘conscious experiences’). If you ask me to explain what consciousness is, all I will be able to give you is just circular accounts using words like ‘awareness’ and such. I cannot analyze this phenomenon by using mere physical or biological expressions like ‘atoms’ and ‘neurons’.” 

Now, this person won’t stop here, and will also make the following claim: “But, it’s not just me who cannot explain this phenomenon. I think that it is impossible to explain this phenomenon by mere physical/biological expressions. Hence, I argue that every adequate account of what we mean by ‘consciousness’ would be trivial/circular/uninformative.” In order to defend this claim, this person would make an argument like Jackson’s knowledge argument, or Nagel’s bat (or any other dualist argument).

If you say in reply that this person ought first to provide a non-trivial explanation of consciousness, he will answer “but that’s exactly what I’m claiming we cannot do”. I get the impression that Glenn is such a philosopher (maybe I’m wrong).

Now, consider also another person who says in reply: “Well, I also can’t give you a non-trivial account of consciousness at the moment. However, I think there’s a lacuna in your dualist argument. Hence I’m not convinced that it is impossible to explain consciousness by reference to physical/biological expressions.” (Maybe Dennett would like to make this claim.)

Don’t you think that such a debate is worthwhile and makes lots of sense?


Best,

Amit  


2016-10-24
RoboMary in free fall
Reply to Glenn Spigel
Hi Glenn.  Thanks for squeezing me in there.
In a previous post, Amit Saad described two possible responses from Dennett to your scenario.  I honestly don't think Dennett would proffer the first type, and I hereby reject the same. The example I presented illustrates the second of those possible responses. The relevant brain state in the case of your scenario is the state of the channels of the robot in room 3.  Amit writes:
 In sum, the subpersonal-level functionalist would not say that the qualitative state of the robot is changing. She would rather ask you for clarifications regarding the brain state of the third robot, and based on your reply would judge whether it is experiencing red or blue.
Again, this situation is what I tried to describe in my example (except I inserted RoboMary, who doesn't have to ask because she knows everything).  In your recent response you (Glenn) said: "[T]here is no internal mapping to colour, only internal processing based on A, B, and C channel variables". Exactly what internal processing is based on the channel variables? The processing can't involve colour because there is no mapping of the channels to colour. As presented, there is no basis to link the incoming signal with a conscious experience of any of the colours. The functionalist(?) view I am describing says that if there is no prior mapping to colour, then there will be no experience of colour. Period. (Er... Full Stop.)  The experience does not ride along with the incoming signal.
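
For concreteness, here is a sketch of what I mean by a prior mapping (all names here are my own invention, not part of the scenario):

def experienced_colour(channel, channel_to_colour=None):
    # The colour experienced is fixed by the robot's own table (if any),
    # not by the source of the incoming signal.
    if channel_to_colour is None:
        return None  # no prior mapping -> no colour experience, on this view
    return channel_to_colour[channel]

print(experienced_colour("A"))  # -> None
print(experienced_colour("A", {"A": "red", "B": "green", "C": "blue"}))  # -> red
print(experienced_colour("A", {"A": "blue", "B": "red", "C": "green"}))  # -> blue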

*

2016-10-24
RoboMary in free fall
Reply to Glenn Spigel

Hi Glenn,

 

a.       Regarding the first response – the answer is ‘yes’. The experience of the scientist would be changing but he would fail to notice it.

b.      Regarding the second response – and this is important – I think that you miss something here (it seems that James of Seattle was making the same point). One of the assumptions of the thought experiment is that Mark 19 robots have qualitative states. In particular, when they get the stimulation 255, 0, 0 through that channel they have certain qualia. After all, they are analogues of humans, who get stimuli through the optic nerve. So, let us say that such an ordinary Mark 19 robot is in a qualitative state of seeing red when it gets this stimulus 255, 0, 0. This is an assumption of the thought experiment.

Now, we have three robots which are exactly like every other Mark 19 robot, except for the connections of their cameras. RoboMary, for example, is such a defective robot. The other two are the robots in the second and third rooms. The sophisticated (sub-personal) functionalist would say that the qualitative state of the robot in the third room is determined by the function of a certain unit in its brain (James calls it the inner mapping of colours). Since the unit functions similarly in every Mark 19 robot (this is an assumption of the thought experiment), it follows that when it gets the stimulus 255, 0, 0 this robot is in a qualitative state of seeing red (BTW, this would also be the qualitative state of the two other robots in rooms 1 and 2). The functionalist does not need to say how she would identify the robot’s qualitative states. She does not need to ask the robot. We assume that the brain of the robot functions just like any other Mark 19 robot, and we assume that when such robots get this stimulus they see red. Her claim is that the experience is determined at the subpersonal level. Once you say that ordinary Mark 19 robots have the qualia of seeing red when the subpersonal level gets this stimulus, she would conclude that this is the qualia of the robot in room 3.

  

If you prefer, we can talk about humans instead of robots. Assume that a neurosurgeon plays with your optic nerve. He cuts and reconnects the nerve fibres, but in such a way that when blue light strikes your retina, the neural stimulus through the nerve would be identical to the neural stimulus that you would currently get when red light strikes your eye (and vice versa). In such a case, the functionalist would say that the qualitative state you have now when you see red objects would be similar to the qualitative state you’ll have after the surgery when you see blue objects.

 

c.       Notice that though the two types of functionalists provide different answers regarding the experiences of the robot, neither of them needs to accept the distinction between what-it-is-like-for the robot and how the robot will react. The first functionalist says that the reactions of the robot on the personal level determine its experiences, and since on the personal level the reactions would be different, she would say that the qualia would be different. The second functionalist would say that the reactions on the subpersonal level determine the qualia. Hence, since the reactions are similar in the two cases (the two conditions of the switch), the qualia are not changing. They’re not forced to accept the distinction you make by this thought experiment.

d.      If you'd rather, we can discuss this over email (amitsaad1984@gmail.com).

Best,

Amit


2016-10-24
RoboMary in free fall
Hi Jonathan, 
So imagine that God modelled this universe in its mind in a similar way to physicists, and then decided upon what a conscious experience of such a universe would be like. A majority anti-matter universe would have a different model. But supposing God did not model it in such a fashion, but modelled the entities of the universe in terms of their relational interactions, then it could be stated that there was nothing to the model other than the relational interaction description, and that it was the same in the models of both universes, because the entities modelled were given no intrinsic features. The cause of the regularities in relationship was God, and God was not causing the regularity of relationship based upon any modelled intrinsic feature of anything in the modelled universe. Fine: there it is easy to understand that the two universes (the one imagined to be mainly matter, and the one imagined to be mainly anti-matter) would be the same, in the sense that the same model is used for both.

But with a physicalist account I am not quite clear on the ontology. I could imagine a physicalist account suggesting that there was no such thing as a cause (other than in the sense of a self-cause), with its physical content being indistinguishable: the apparent distinctions in form being a fundamental intrinsic feature of the undistinguished underlying. Is this the kind of account that you are getting at?

Alternatively, if it were simply a physicalist account in which some features were causal, say a force being considered to cause a certain relationship, then it would seem to have the problem that the cause would be an intrinsic feature of the force, and that would mean that matter forces and anti-matter forces would have different intrinsic features, causing the relationships each force had to other forces to be different. There the mainly anti-matter universe would have different intrinsic features from the matter universe.

If I assume the first alternative (the undistinguished content with uncaused fundamental intrinsic features), then presumably you would be considering that the whole evolution of the universe from the time of the Big Bang was a fundamental feature of the undistinguished underlying, and that the ability of physicists to model its expression in terms of variables was a coincidence, since there would be nothing in the reality of the situation that the variables corresponded to. Is that the case? If so, I would have thought a theory that could explain the correlation would be considered superior to one that could not. However, you seemed to make the claim that only that way of looking at it was compatible with physics, and would therefore be claiming that there could be no other theory compatible with physics which could explain why physics was able to model the apparent distinctions using such variables. So what are you suggesting is incompatible with physics about the idea that only mind exists, albeit of different scales of ability, and you being given an experience of this universe by another mind of greater ability than your own (for example)?

Regarding the neuron idea of yours, you seem to be stating that millions of neurons are experiencing being you in the same way. So is there some identical pattern to be found in each of them, or are they experiencing different patterns in the same way, or is there no pattern required for the experience? Would a neuron in a brain-in-a-vat type scenario be expected to have the same kind of conscious experience for example?

Also, I am still not clear on your position on what 'you' is. Are you of the mind that while 'you' have the sensation of moving through time, the universe is a block universe in which no content changes spacetime coordinate, and that 'you' are the experience of what it is like to be a neuron existing within a certain spacetime locational range, but that at different spacetime locational ranges there are other neurons having the same experience of 'you'? If so, then why should your experience be linked to a spacetime locational range as it would be in the case of a neuron? Or was it perhaps that it was the experience of a spacetime location which is located within a neuron?

Thank you for this conversation, as it has led me to consider things I had not previously considered.

Yours sincerely,

Glenn


2016-10-24
RoboMary in free fall
Reply to Amit Saad
Hi Amit, hi James, 
I hope you do not mind me combining my response to you both in one post. 

Firstly Amit, in point (c) of your post you seemed to me to not understand my point. Yes, one functionalist could claim that the experience would remain the same and the other could state that it would change, but without an argument which showed how they could tell the other was wrong, then while they could know how the robot would behave, they could not claim to know that the other was wrong and not them, and therefore could not claim to know what-it-was-like for the robot. James, you seemed to state that the other was wrong, and that your reply would be correct, but how would you know that? Without an argument to show it, the distinction would exist. You would know how the robot would behave, but not that you were correct. Knowing you are correct is different from having a theory about what is correct.

Regarding point (b), I would prefer to talk in terms of robots as it is simpler; besides, I am not of the mind that we are like robots. I just switch to humans to put into perspective what the theory would be implying about humans. So with regards to the Mark 19, imagine that the way the pixel channels A, B, C are used in motion detection is that subsequent channel values for each pixel are compared and differences acted upon. Notice the processing does not depend on what those channel values represent, or imply any representation. For object detection, comparisons are made between the channel values of adjacent pixels, etc. For discussion of object colours, channel values are mapped to words. Imagine it had its own language for channel value combinations, so that if you knew the channel-values-to-words mapping, its reporting that an object was "blah" would indicate to you what ranges (assuming there was not a word for each combination) the channel values were within. Although we might write 255.0.0, the processing can be done in parallel for each channel, so it need not even be regarded that "one channel is first, the other second..."

Imagine a scenario where such a Mark 19 was only ever found with RGB cameras in: on what basis could a functionalist declare 255.0.0 not to be red? Likewise imagine a scenario where such a Mark 19 robot was only ever found with BRG cameras in. On what basis could a functionalist declare 255.0.0 not to be blue? Can you not see that any argument made that it should be considered as red could equally be made that it should be considered as blue, as neither requires any difference in processing? The idea that there need be some indicator in the way the channel is processed is false, at least in the computers we use. Yet you both seemed to be holding that assumption, and in the argument asserting it, in the claim that special processing would need to be done if it were to function as processing on green intensity values, for example. I cannot understand what basis there is for such an assumption, given that in programming there is no such requirement. An 8-bit variable could represent red, green or blue, and there is no need to process them differently (see the sketch below). Sure, you might map the channel values to words such as "red", "green", or "blue", or "rouge", "verte", or "bleu", but those words could be changed; the language could be original, for example, or, as I suggested in an earlier post, 255.0.0 could map to "red" in English but "bleu" in French. The usefulness in considering robots is that, as far as I am aware, these points are indisputable. There is no need for different processing such that you could just look for which channel maps to which type of processing to give you the answer, as it seemed like you were thinking. Thus for room 3, any argument that 255.0.0 would be experienced as red could be modified to argue that it would be experienced as blue, with neither argument holding any more weight than the other. As a side issue, I have a similar argument to do with what processing represents, to argue that beyond reasonable doubt what we are experiencing is by design, which I will post to the group later.
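
Here is the sketch promised above (my own illustration; the function names and thresholds are invented, not part of the Mark 19 description):

def motion_detected(prev_pixel, curr_pixel, threshold=10):
    # Compare successive channel values for one pixel; the comparison is the
    # same for every channel, whatever colour that channel happens to carry.
    return any(abs(c - p) > threshold for p, c in zip(prev_pixel, curr_pixel))

def same_object(pixel_a, pixel_b, tolerance=5):
    # Adjacent-pixel comparison for object detection: again purely
    # channel-by-channel, with no reference to colour semantics.
    return all(abs(a - b) <= tolerance for a, b in zip(pixel_a, pixel_b))

# Identical calls work whether the camera wiring is RGB or BRG:
print(motion_detected((255, 0, 0), (0, 0, 255)))  # True either way
print(same_object((255, 0, 0), (250, 3, 2)))      # True either way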

 Yours sincerely, 

 Glenn

2016-10-24
RoboMary in free fall
Reply to Glenn Spigel

Hi Glenn,

The problem with the term ‘physicalist’ is that nobody is quite sure what it means. I think it mostly means that all causal relations, including those involved in phenomenal experience, are covered by the laws of physics. Physics has no ontology beyond its dynamic relations. ‘Cause’ in its traditional form has to be updated, but dynamic relation replaces it in modern parlance. A lot of people think ‘physicalist’ refers to some sort of ‘physical’ quiddity of ‘physical stuff’, but that has nothing to do with physics – it is just a man-in-the-street idea.


I should perhaps clarify what quiddities physics disallows and what it is neutral on. Contemporary physics disallows any token quiddity of the sort that might distinguish electron A from electron B over and above their histories of dynamic relation. Physics may not disallow type quiddity, in the sense that it could allow there to be an ‘electron quiddity’ that underpinned a certain charge and spin. It is more that physics makes this absurd or implausible. Such a quiddity would be something we have never had any means of knowing, nor ever had any experience of anything similar. Although people often assume things must have an ‘inner nature’ beyond what they do, this would be the one thing nobody could possibly ever have had reason to propose or teach someone else to assume.


You ask: ‘So what are you suggesting is incompatible with physics about the idea that only mind exists, albeit of different scales of ability, and you being given an experience of this universe by another mind of greater ability than your own (for example)?’


I think that begs the question as to what mind is. If it has the dynamic relational properties we find in physics then that is fine. It is after all the Leibnizian position that I prefer.


I think it is unlikely that every dynamic relation in every neuronal dendritic tree that is associated with experience (if it is) is identical. Each cell will be slightly different. However, I suspect there is no meaning to the question of whether they ‘seem the same’ – i.e. whether the experience is ‘the same’. If there is no possibility of comparing two experiences because they belong to different subjects then there is probably no fact of the matter whether or not they are the same. That may be too sweeping, however, in that one might be able to say that one was much more complex than the other. The same consideration would apply to a nerve in a vat.


In my view ‘I’ as human subject is probably a short-lived mode of excitation in a neuron with a domain of non-trivial dynamic relation of less than a cubic millimeter and perhaps 20 milliseconds. It could be a lot longer than that but that gets difficult. I am not sure that I have a sensation of moving through time. That seems a cultural metaphor. The block universe idea in which ‘nothing changes’ seems a bit question-begging again, but by and large I do think each mode experiences the content of its non-trivial domain. So the continuity of ‘I’ is an illusion. At each point in time a different mode is doing the experiencing. It may have a sense of having experienced before, but that is likely just to be an idea fed to it by the rest of the brain.


As to why ‘my’ experience should be related to a neuronal domain, that is simply because ‘I’ in this case is a mode in that neuron. There is no ‘Me’ or ‘I’ in addition to those in each neuron. A lot of people find that hard to understand, but it is the parsimonious assumption – and the only viable one I can think of.



2016-10-24
RoboMary in free fall
Reply to Amit Saad

Hi Amit

RE”….Now, this person won’t stop here, and will also make the following claim: “But, it’s not just me who cannot explain this phenomenon. I think that it is impossible to explain this phenomenon by mere physical/biological expressions. Hence, I argue that every adequate account of what we mean by ‘consciousness’ would be trivial/circular/uninformative.”

Perhaps I should be clear that I’m not trying to argue that nothing enlightening can ever be said about consciousness (if that’s what you are inferring from what I’ve said? – I’m not sure). It would be sheer dogmatism on my part to say that. How could I know?  I’ve simply been criticising what strike me as inadequate approaches – not only inadequate, I should say, but very annoyingly so, because they seem to me to trivialise a question that is so hugely important. What could be more central to an understanding of human life and conduct, in all its high points and all its unspeakable horrors, than the nature of our consciousness? Yet what do we get from philosophers – especially the “analytic” kind? Puerile stuff about Hollywood zombies, vacuous arguments about “something it is like”, fantasies about brains in vats, endless meanderings about neurons, silly diversions like panpsychism, etc etc. All that manages to do, to my mind, is demean philosophy – reveal that it has yielded to a kind of creeping infantilism and is no longer a serious study of human life and experience.

RE: “Hence I’m not convinced that it is impossible to explain consciousness by reference to physical/biological expressions.” (Maybe Dennett would like to make this claim).

Of course, it’s open to anyone to try to defend this view. The problem is they so rarely do. They go on endlessly about neurons etc without showing what relevance that has to human consciousness, and usually without even being able to say what they understand by the word (which as I recall Dennett even admits to). It’s perfectly obvious that consciousness could not exist without the brain, but that tells us precisely nothing about the nature of consciousness.

This is a fairly quick response and I’m not sure it addresses your comments as directly as you would have liked.

DA


2016-10-25
RoboMary in free fall
Hi Jonathan, 
Physics seems to me to be the study of patterns of regularity in what we consciously experience which have been analysed through a combination of inductive and deductive reasoning to give a system with predictive abilities. It is compatible with idealism for example, which is what I was referring to by the concept of minds being given the experience by an other mind(s). By mind, I just mean that which consciously experiences. So I was not sure on what basis you were claiming that dynamism was the only conception compatible with physics. 

It seems to me that your conception contains certain metaphysical assumptions, for example that corresponding to what we experience is a universe which contains a form in spacetime, while at the same time seeming to claim that it makes no metaphysical assumptions, which, if that were the claim, would be false unless it was a version of idealism in which the source of your conscious experiences were unknown. Smuggling metaphysical assumptions into physics (which I was referring to as the mainstream interpretation of physics) and then claiming not to make any, presenting it instead as just an account based on physics, can be unpicked, and such a claim can be shown to be false (the smuggling of the assumptions exposed). I am not suggesting that you are claiming to make no metaphysical assumptions, or that your claim is anything more than a version of idealism in which the source of your conscious experiences is unknown. I am still trying to understand what your position is.

I am not sure what you meant by the comment that "the block universe idea in which 'nothing changes' seems a bit question begging again but by and large I do think each mode experiences the content of its non-trivial domain", because I do not know what question you had in mind, or who you thought would be doing the question-begging. Was it that you felt my suggestion, that in the block universe model the content at each coordinate is fixed, was question-begging?

By physicalism I meant it contained the idea that universes were the containers of existence, and universes contained space, and within that space could be forms which would be considered physical. Unlike materialism, where there is the idea of matter in space, physicalism also allows for the idea that there are just fields, as in quantum field theory for example. I was thinking that you might be thinking of the universe containing a field in spacetime. Is that correct? Because if it is, then how are you explaining the field having a pattern which can largely be modelled by certain variable relationships, as is done in physics? What would those variables be representing in your view of reality?

Regarding your view that "'I' as human subject is probably a short lived mode of excitation in a neuron with a domain of non-trivial dynamic relation of less than a cubic millimeter and perhaps 20 milliseconds": why are you considering yourself to be a segment of spacetime rather than a point in spacetime? The reason I ask is that you seem to be making some ontological unit out of a segment, but what would be special about that segment such that it should gain an ontological status? To be clear, I am distinguishing that from the claim that if you took any point of the field in spacetime then the conscious experience at that point would depend upon some relation to its surroundings, where the relation involved proximity and the content variation across the spacetime coordinates within that proximity. Is it more like the latter, but you are just referring to all the relevant content as 'you'? So 'you' are the relevant content at a certain spacetime coordinate, and 'I' am the relevant content at another, and although I think of myself as having consciously experienced yesterday and as going on to experience tomorrow or the next few seconds, 'I' will not, any more than 'I' will experience what-it-will-be-like to experience one of the neurons in the human form that yours is imagined to be in. Neither 'I' nor anyone else has any continuity of conscious experience varying over even a second of time if you are correct; is that what you are suggesting? So that what we experience is only ever that proximal to a certain point in spacetime.

The reason I was asking about the brain in the vat (and this relates to an argument I was going to post later, but I might introduce it here) is that I am not clear what the suggestion is, in your account, for why I should be experiencing typing on a computer, for example. Where is that in the neuron? If we look at the original post, it highlights the issue of the functionalist story and its attempt to explain the experience. But I could go further. The Mark 19 robot brain could be in a vat, and the signals come from a totally different source; indeed you could imagine a billion different scenarios where the source of the signals was different but the signals were coincidentally the same. Unlikely perhaps, but unless it was explained how in the account the unlikeliness mattered, the unlikeliness is irrelevant.

You might wonder where I am going with this, and while it might take a few posts back and forth to explain, I will try here. With functionalism, as I understand it, the idea tends roughly to be that the conscious experience depends upon what the processing represents given the context, i.e. upon what function the processing is performing. But across the billion different contexts, what is a Mark 19 computer brain in one could be a fairy light controller in another, lighting up fairy lights based upon the inputs it gets (which in one scenario could have been sound encodings, in another something else). So the function changes depending on context, and thus presumably the conscious experience. If instead it is stated that, no matter the context, the function it was performing in one context was special, and the experience would always relate to that one, then there is the issue of what was special about it. With the special idea, the conscious experience would be based on the activity always having a certain representation or symbolism, and symbolism is not something I would have expected to find in an undesigned universe. (See the sketch below.)

As I understand it, "bionic" eyes etc. could indicate whether or not the conscious experience relates to the source of the signal in the experienced universe. I assume the experience does not change dependent on whether the signal resulted from a camera detecting light or from a scientist wirelessly sending it as a test signal, as long as the signal was the same. That can be contrasted with an imagined conscious experience which mapped to some 'physical' attribute: for example, experiencing a visual flash of light every time a neuron fired, or a visual sensation of a colour mix based on which chemical bonds were broken and an auditory mix based on which were made. Hopefully you have managed to follow that, though I realise I may not have been clear. Relating it back to your account: since you seemed earlier to suggest that the conscious experience could be the same in a neuron in a brain-in-a-vat, it does not seem that you are suggesting the conscious experience is based on the context the human brain is in. So are you suggesting that the activity symbolises the computer I am experiencing (as it would have to in a computer, for example), or are you suggesting that the experience maps to relational attributes within the neuron? Obviously idealism can provide an answer for why the conscious experience was based on a particular symbolism (the others not being fit for purpose), but I am not sure whether you are denying a symbolic link, and if so what your suggestion is.
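
As a crude illustration of the context point (my own invented Python sketch, assuming nothing about how a Mark 19 or a fairy light controller would really be built):

# Invented sketch: one and the same processing step dropped into two
# contexts. Nothing internal to the computation fixes which "function"
# it is performing.

def process(signal, threshold=128):
    """Threshold each 8-bit input value to 1 or 0."""
    return [1 if value > threshold else 0 for value in signal]

signal = [200, 30, 170]  # camera intensities in one scenario,
                         # sound encodings in another

brain_output = process(signal)   # context 1: a stage in a robot visual pathway
lights_output = process(signal)  # context 2: driving a string of fairy lights

print(brain_output == lights_output)  # True: identical processing either way

If the experience tracks the function performed, it should differ between the two contexts; if it instead always tracks one privileged function, something has to be said about what privileges it.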

Yours sincerely, 

Glenn



2016-10-25
RoboMary in free fall
Reply to Glenn Spigel

Hi Glenn,

I agree that physics is compatible with idealism if by idealism you mean the sort of approach Leibniz uses. He is often called an idealist, but he makes it clear that there is a real universe of goings on that is distinct from the point of view that is the subject/monad, even if that universe is ‘reflected within’ the monad. Basically, it is not an idealism that says that there is no mind-independent reality. Even for the monad much of the real universe is only perceived in a totally confused and indeterminate way, but it is still ‘out there’ in infinite detail. Put another way, all goings on are reflected in all other goings on, if confusedly, and do not belong to and are not created by any particular observer observing.


So if idealism just means that everything is ‘mental’, that is fine, except that if we want useful descriptions of the interactions between such ‘mental’ entities the best system we have is physics, so it is unclear what is gained by calling events ‘mental’. Dynamism is the neutral position of saying that all we presuppose are the dynamic relations, without committing to calling things mental or physical. It is the only approach consistent with physics in the sense that it denies any additional token quiddities.


I certainly do not assume that we are in a universe that ‘contains a form in spacetime’. That is thing-ism again. In Leibniz’s terms space and time are abstractions from the quantitative aspect of dynamic relations. This was his big argument with Newton. There are no things in anywhere. So no, I am not making those metaphysical assumptions.


A block universe that is somehow ‘static’ or ‘unchanging’ is a metaphor that shifts 4D Minkowski space into a language of 3D space (fudged to 4D even though nobody can do that) without time – asking one to imagine a block sitting there. It is thing-ist again. The block universe does change because change is what you get with time and Minkowski spacetime has time in it. It is just playing around with envisagings of metaphysical problems when we know envisaging can only confuse. Space and time are not ‘like anything’ except in the sense that they are like what they are like to us in ordinary experience. So questions like ‘has the future really already happened’ are just meaningless.


With regard to what my field variables would be representing I don’t think they represent anything. They are dynamic field variables. They are the pattern of the relation. There are no ‘things’ that they represent.


Fifty years ago as a teenager I arrived at a metaphysical framework that I called electrical point consciousness. I ascribed sentience to every point in the universal EM field. This is in fact what Leibniz did in his youth (around 1670). However, on returning to the problem forty years later I realized that a point in spacetime is no good because it has no dynamic power of its own – it is just a place – and in fact because of Heisenberg uncertainty there may be no real infinitesimal points. Leibniz seems to have realized something similar and shifted from a ‘physical point’ to a ‘metaphysical point’. A metaphysical point is not one point in space but rather one dynamic individual. It has a point of view, but that is not from a point; it is from an ‘aspect’ that is focused on a domain that is its ‘body’ but continues out to include the whole universe. This is exactly what we see in a quantum wave equation. We have a V term that has no boundaries and therefore covers potentials in the whole universe, but values for V only affect the solution to the equation non-trivially in a local domain often called a ‘wave-packet’. For the modes of excitation that occupy ordered bits of matter the analogy with Leibniz is particularly striking because the values for V for an acoustic mode are tied very tightly to the domain of that bit of matter. Thus if there is piezoelectric coupling the photon field to which a phonon is coupled is tightly restricted to the ‘body’ that vibrates.
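
To put the familiar textbook form on it (my gloss, nothing more): in the time-independent wave equation −(ℏ²/2m)∇²ψ + V(r)ψ = Eψ, the potential term V(r) is defined without boundary over all space, yet it affects the solution ψ non-trivially only where the wave-packet has appreciable amplitude. That is the sense in which a mode has a local ‘body’ while standing in a notional relation to everything else.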


In other words, like Leibniz I am not cutting out a segment arbitrarily. I am giving the subject a special relation in a domain defined by the wave equation but a notional relation to everything else as well. Also we are not talking of thing-ist content of spacetime but always of relation to a mode. Each ‘I’ is a different mode. Since our brains are not overlapping there will be very little overlap in the domain of non-trivial influence on a ‘you’ and a ‘me’. But of course if we were talking of robots we could probably construct one in which the field of potentials influencing two subjects overlapped.


Yes, I think it unlikely that any human subject, in the form of a mode, lasts more than a few milliseconds. However, there is something very odd about acoustic modes here which relates to their being Bose modes with a wide range of possible energy contents (i.e. with a wide range of possible quantum number for the notional ‘number of particles’). This is where it is very hard to read the metaphysical implications and it might be that we can legitimately treat a neuron as an enduring subject over many years.


If you experience typing on a computer my suggestion is that a neuron gets an input of potentials that have a pattern that usefully indicates typing on a computer. The neuron could be in a vat, as long as someone plugs in synaptic boutons that will fire the right pattern of signals. Whether in that situation the pattern would be 'useful' is doubtful but that seems obvious, and an unrelated question. This is why I think functionalism is a dead end. It gets tied up in issues of broad and narrow content and Putnam’s confusions about externalism which arise purely from philosophers not taking care with their words. Before discussing things like RoboMary one has to be very clear what one is meaning by reference and meaning etc etc and to be aware of the ascertainment problems that cloud all statements about what experiences are like.


The weasel words in David Chalmers’s functionalist account are ‘of sufficiently fine grain’. If function is dynamic relation then to get the right experience you would need the right relations – and presumably down to the finest grain there is to get it absolutely right. That ends up with an identity theory that no longer needs to be called functionalism. I cannot see any conceivable reason why events in silicon chips should seem to some mode within them anything like what they feel like to modes in neurons, even if one were allowed to make such a comparison. Apart from anything else, inputs to the electron modes that support semiconductivity in a computer would seem to have only two relevant degrees of freedom, instead of 40,000 in a neuron.



2016-10-26
RoboMary in free fall
Reply to Glenn Spigel

Glenn,

I’d like to distinguish between what we take to be true and what the argument actually proves. I believe the situation is as follows:

Nagel brought up an argument which presumably shows that there is a distinction between how an individual would react and what-it-is-like-to-be that individual (we know how a bat reacts, but don’t know what it is like to be a bat). Dennett in reply rejects Nagel’s intuition, and argues that such a distinction has never really been argued for. Now, your argument should force Dennett to accept that there’s such a distinction (logic should take him by the throat). How can your argument do it? Well, you should take the functionalist’s position as an assumption, present a scenario and show that this assumption yields either a contradiction (that a robot both sees red and blue, for example), or that the account leaves some room for indeterminacy (that it is indeterminate whether the robot sees red or blue). I was under the impression that you argue that a physicalist account yields indeterminacy (you’ve said there’s no way for a physicalist to establish whether the robot sees red or blue).

Unfortunately, this is not the case. James and I show you two functionalist positions which don’t leave room for indeterminacy. We start from the functionalist assumption and show you how the account determines what the robot sees. This is where your argument fails. You can challenge in reply the functionalist assumptions (how can we know that the functionalist is right? why should we take the functionalist position as a starting point?). But this is another question, beyond the scope of the argument. At the moment, if one holds functionalism, one shouldn’t be impressed by the argument.

Now, regarding the side-issue question: you ask how we can decide which of the two forms of functionalism is correct (personal or subpersonal level functionalism). You think that since both of them are in line with the physical facts, there is no way of answering this question. But the obvious answer is that the arguments for the different versions of functionalism/physicalism do not rely merely on physical facts.

For example, the main argument in support of personal level functionalism is that the criteria for applying the term ‘consciousness’ to someone are behavioural. The possibility of human-like behaviour is required for saying that someone is conscious or unconscious (there are lots of Wittgensteinians who make this point; see, for example, Peter Hacker). Hence, they would reject subpersonal level functionalism as inadequate.

On the other hand, in defence of subpersonal level functionalism, one can say that, unlike personal-level functionalism, this view does not yield a distinction between the subject’s avowals and the subject’s experiences in scenarios like the one of the three robots. So, if it is part of the meaning of ‘seeing red’ and ‘seeing blue’ that an English-speaking cogent subject would be able to distinguish between seeing red and seeing blue, personal level functionalism is inadequate.

There may be other reasons for preferring one functionalism over the other (simplicity, for example). Still, this is a side issue beyond the scope of the argument. The point is that if you start from functionalism you don’t get a contradiction/indeterminacy, even in this three robots scenario.

Best

Amit    


2016-10-26
RoboMary in free fall
Reply to Amit Saad
One thing that may be worth adding is that the suggestion that there is no distinction between what it is like to be an individual in a certain situation and how they would react is absurd on common-sense grounds. It suggests an isomorphism between experience and behaviour, which is ridiculous. Our behaviour is always a result of what is perceived at that point TAKEN IN THE CONTEXT of the way the brain is set to respond at that moment. That context will depend on a complex integration of all past experiences. So there has never been the faintest suggestion, whether in a 'functionalist' framework or any other, that the two are even remotely comparable. Modern theories of how we perceive and act are all based on the assumption that response is overwhelmingly dependent on context.
If I am shown a type of biscuit with one pointed end that I do not like, in each of twenty different orientations, what it will be like to be presented with the biscuit will be different in each case, but I will show no interest in any of them. If I am told to say where the pointed end is pointing I may give a different reply in each case, but if I am told to say 'bananas' whenever I see the biscuit point to the left I will behave in quite a different way.

If I am driving on a French road I behave differently faced with the same road as I do in England - I do everything the opposite way around.

The whole idea that what it is like is how you behave is drivel, surely?

2016-10-26
RoboMary in free fall
Hi Jonathan, 

I do not see how your conception is "a version of idealism in which the source of your conscious experiences were unknown", because you posit that the source of conscious experience is a neuron (which you consider in terms of dynamic relations), so you make the metaphysical assumption that neurons (or the dynamic relations that you consider neurons to be) exist and give rise to conscious experiences. I meant a type of idealism akin to Berkeley's but without any attributions with regard to the source of the conscious experience. Berkeley attributes some conscious experiences (such as thoughts) to himself, and others to God. From your comments it seems to me that Leibniz is also making metaphysical assumptions:

"...he makes it clear that there is a real universe of goings on that is distinct from the point of view that is the subject/monad, even if that universe is ‘reflected within’ the monad. Bascially, it is not an idealism that says that there is no mind-independent reality."

I do not know what you mean by "change" in the block universe model. With presentism, change in existence is easy to understand. Consider a presentist model in which the domain of existence is space. What exists at a domain coordinate can change, in the sense that if you were to ask what exists at a given coordinate the answer could vary depending upon when you asked. In a block universe model the domain of existence is spacetime, and what exists at a domain coordinate does not change. So in the presentist model there can be change at domain coordinates and change between them; in the block universe model there is no change at domain coordinates, only between them. And as I understand you, you were suggesting that 'you' exist around (I use the term around to show that I realise you are not referring to a point) a point in spacetime, occupying less than a millimeter in any space direction and about 20 milliseconds in the time direction from your perspective.
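
To put that in symbols (just my own shorthand): presentism treats existence as a time-indexed assignment E_t(x) over space coordinates x, where the value of E_t(x) can differ for different times t of asking; the block model treats it as a single fixed assignment E(x, t) over spacetime coordinates, whose values differ between coordinates but never at one.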

You mention "since our brains are not overlapping there will be very little overlap in the domain of non-trivial influence on a ‘you’ and a ‘me’", but there would be 'someone else' (although not 'me') existing at 1 second distance from you from your perspective (assuming the human you were a part of existed at the same space coordinates in that time direction). So what about at 5ms from you? Because in such a case there would be considerable overlap would there not, or were you considering a minimum time distance that the beings would need to be separated by?

Also (sorry here for so many questions but I am still not clear on your position), you mention that you "certainly do not assume that we are in a universe that ‘contains a form in spacetime’", and mention "in Leibniz’s terms space and time are abstractions from the quantitative aspect of dynamic relations" so what are the participants in the relations?  

Regarding my experience of typing on a computer, I was asking whether you were suggesting that the link between the conscious experience and the dynamic relations that were the neuron was symbolic or based on the neuron's attributes (presumably its dynamic relations), and you wrote:

"If you experience typing on a computer my suggestion is that a neuron gets an input of potentials that have a pattern that usefully indicates typing on a computer. The neuron could be in a vat, as long as someone plugs in synaptic boutons that will fire the right pattern of signals. Whether in that situation the pattern would be 'useful' is doubtful but that seems obvious, and an unrelated question."

So you did not directly suggest either a symbolic relation or a relation based upon any neural attribute, and while in the first sentence you brought in concepts such as usefulness and whether it is an indicator, in the third sentence those seem to be discarded. Were you clear on what I meant in the previous post to you about the distinction between whether the relation was symbolic or whether you were suggesting it related to some 'physical' attribute (some dynamic relation in your account)?

Yours sincerely,

Glenn 


2016-10-26
RoboMary in free fall
Reply to Amit Saad
Hi Amit, 

If I was trying to show that functionalism was inconsistent or indeterminate then yes, I would have to show that it was inconsistent or indeterminate, but that is not what I was trying to show. Dennett's position is that the claim "if you know how it behaves then you must also know what-it-is-like" has not been shown to be incorrect. So I only need to show that there is a difference between knowing how something would behave and knowing what-it-is-like to be it.

The argument just highlights that people can agree on how it will behave but disagree about what it would be like. Some could think it would be switching between red and blue, some could think it would be red, some blue, and a theist would probably not think it would be consciously experiencing at all. Job done. For there to be no distinction, knowing how it behaves would have to mean knowing what it would be like. But if they can all know how it will behave and yet hold different, incompatible opinions about what-it-is-like, then at least some of them knew how it would behave but did not know what-it-is-like; they could not even guess it correctly. Guessing is different from knowing, but I do not need to make that distinction in order to make the argument.
 
If you could tell which was right then you could know, but that does not alter the fact that there is a distinction between knowing how something would behave and knowing what-it-is-like, since knowing how something would behave does not require the ability to tell which theory about what-it-is-like was correct.

As it happens I believe the argument does also indicate that sub-personal functionalism (as you refer to it) is indeterminate because, as I pointed out in the last post to you and James, the processing of the A, B, and C channels does not need to vary depending upon which of those channels represented red, green, or blue in order to have a robot distinguishing between those colours. Did you understand my previous post to you and James? If so then I am interested in how you were thinking there would be no indeterminacy. Were you disagreeing with what I wrote, or were you thinking that even with that being the case you still had a method of determining it?

Yours sincerely,

Glenn




2016-10-26
RoboMary in free fall
Reply to Glenn Spigel
Dear Glenn, I think the consensus is that even Berkeley did not believe in that sort of idealism. I think we are drifting rather off topic now. I can answer all your various queries, but most of the time it all comes back to forgetting the jargon that has grown up in recent academic philosophy, because it involves such muddled usage of words. The issues around block universes and presentism are not, I think, as you describe them. You get tied in knots if you stick to intuitive meanings of words, but we know that in this area you have to think outside that box. When I listen to philosophy seminars I tend to think people are trapped in words and do not take enough time thinking about how makeshift natural language is.

2016-10-26
RoboMary in free fall
Reply to Glenn Spigel

Isn’t it simply possible for both Mary and RoboMary to have a representational knowledge of everything about colour?  For example, all her knowledge is composed of representations of ‘red’ that aren’t actually red: the word ‘red’ does not have a redness quality to it, but must be correctly qualitatively interpreted in order to know what the word ‘red’ means.  So can’t both Mary and RoboMary know everything, symbolically, about red, yet not know how to interpret their symbolic knowledge, at least not until they experience it for the first time, and only then know what all their abstract knowledge of everything about red represents?




2016-10-26
RoboMary in free fall
Reply to Glenn Spigel


Hi Derek,


You said (or you quoted someone as saying): "every adequate account of what we mean by 'consciousness' would be trivial/circular/uninformative."


To me the word "consciousness" is too broad a term to know what someone is talking about, so I'm going to drop down a level and be a bit more specific.  The only thing "mysterious" is the qualitative nature of consciousness.  So, to be more specific, I'm just talking about qualitative nature of consciousness or qualia.  I.e.: what, where, and how is a redness qualia or quale?  Most people assume that qualia are ineffable.  People like Dennett define qualia to be ineffable.  To say that all descriptions of qualitative natures are "trivial/circular/uninformative" is also more or less saying that qualia are ineffable.

A trivial case of ineffability has to do with whether you can tell if some other conscious being has qualia inverted from your own.  If you can reliably tell whether someone has inverted qualia, you will effectively be effing the ineffable.  Once you can do this, you have finally provided a description of the qualitative nature of consciousness that is neither circular nor uninformative.

Once you find out what physical process is responsible for or "has" a redness quale, and you know how to observe such in other brains in a way that does not make you blind to qualia, discovering if someone has inverted qualia from yourself (or effing the ineffable) is easy.



2016-10-26
RoboMary in free fall
Reply to Glenn Spigel

Thanks, Glenn, now I see where you’re heading. Still, I’m not convinced. Let’s talk about necessary a posteriori statements. Consider for example the following:

1.     Clark Kent is Superman.

1 is true. Hence, anyone who knows Clark Kent actually knows Superman. Lois Lane, for example, had known Superman intimately, before she knew that C.K. was Superman. (I obviously chose this Hollywoodian example to annoy Derek ;) )

Now, philosophers may not know that Clark Kent is Superman and provide all sorts of theories about who the hell this Superman is. Crude functionalists may say that Michael Jordan is Superman. Behaviourists, on the other hand, may say that Glenn is Superman. Formally, they are debating the true interpretation of the identity statement (propositional function):

A.    X is Superman

Even though they may agree that A yields:

B.    Anyone who knows X knows Superman.

The crucial point which you should notice is that even though there’s such a philosophical debate on A, anyone who knows Clark Kent knows Superman.

I hope the analogy is clear. Dennett suggests the following identity statement:

A’. How one would behave under certain conditions is what-it-is-like to be that person.

Therefore:

B’. Knowing how one would behave is knowing what-it-is-like to-be that person.

Now, in order to reject B’ it is not enough to show that people are debating on the proper identity statement:

X is what-it-is-like-to be Mark-19.

What you need to do, is to show that Dennett’s identity statement (i.e. A’) is false. For if it is true (like C.K. is superman), then B’ would follow (even if there’s a philosophical debate on the proper reductionism of the what-it-is-like clause).

 What do you think?

Best,

Amit


P.S.

A. Jonathan, I’m sorry for not responding. Maybe the distinction Dennett makes can be rejected on other grounds. Still, I think we should focus on the validity of Glenn’s argument.

B. Derek, thank you for clarifying your position.           


2016-10-26
RoboMary in free fall
Hi Jonathan, 
I agree that the conversation was becoming unfocused, so would you mind if we just focused on the following part of my last post?
---
Regarding my experience of typing on a computer, I was asking whether you were suggesting that the link between the conscious experience and the dynamic relations that were the neuron was symbolic or based on the neuron's attributes (presumably its dynamic relations), and you wrote:

"If you experience typing on a computer my suggestion is that a neuron gets an input of potentials that have a pattern that usefully indicates typing on a computer. The neuron could be in a vat, as long as someone plugs in synaptic boutons that will fire the right pattern of signals. Whether in that situation the pattern would be 'useful' is doubtful but that seems obvious, and an unrelated question."

So you did not directly suggest either a symbolic relation or a relation based upon any neural attribute, and while in the first sentence you brought in concepts such as usefulness and whether it is an indicator, in the third sentence those seem to be discarded. Were you clear on what I meant in the previous post to you about the distinction between whether the relation was symbolic or whether you were suggesting it related to some 'physical' attribute (some dynamic relation in your account)?

---

Yours sincerely, 

Glenn

2016-10-27
RoboMary in free fall
Reply to Amit Saad
Hi Amit, 
You wrote:

Now, philosophers may not know that Clark Kent is Superman and provide all sorts of theories about who the hell this Superman is. Crude functionalists may say that Michael Jordan is Superman. Behaviourists, on the other hand, may say that Glenn is Superman. Formally, they are debating the true interpretation of the identity statement (propositional function):

A.    X is Superman

Even though they may agree that A yields:

B.    Anyone who knows X knows Superman.


Presumably you would also be suggesting that, given the following statement,

A''. Superman is X

They may agree that  A'' yields:

B''. Anyone who knows Superman knows X  

I assume the philosophers arguing over who Superman is are supposed to be analogous to the philosophers arguing over what-it-is-like to be the robot in room 3. But if the philosophers were to gain the knowledge that X was Clark Kent they would have gained knowledge; likewise they would gain knowledge if they were to come to know what-it-was-like to be the Mark 19. Which I think was the point of the original Mary experiment: that Mary would gain knowledge, a point Dennett was denying.

Regarding the statement:

A'. How one would behave under certain conditions is what-it-is-like to be that person.
Were you thinking that the meaning would change if it was written as:

A'''. What-it-is-like to be a person is how that person would behave under certain conditions 

With A''', if the 'is' was one of definition, then there would be no knowledge difference between knowing how one would behave under certain conditions and knowing what-it-is-like to be that person. But that would simply be changing the definition of what-it-is-like to be a person, an act which we have previously discussed.

If (with A' or A''') the 'is' was one of composition, then it does not work for Dennett in his argument, as shown by the Superman case: the identity could be true, and yet the philosophers would still learn something new if they were to gain the knowledge that Superman was Clark Kent, in a similar way to how they would learn something if they gained knowledge of what the Mark 19 in room 3 was actually consciously experiencing.

Dennett's claim of no distinction requires that knowledge of what-it-was-like would not be new knowledge for anyone who knew how the robot would act and behave, but it would be new knowledge for the philosophers who were debating what it would be like. So there is a significant distinction between knowledge of how something will behave and knowledge of what-it-would-be-like.

Yours Sincerely, 

Glenn




2016-10-27
RoboMary in free fall
Reply to Brent Allsop

Hi Brent

Re: You said (or you quoted someone as saying): "every adequate account of what we mean by 'consciousness' would be trivial/circular/uninformative."

I was quoting someone, as I recall. It’s not my view (and anyway it seems self-contradictory: if the account was “adequate”, how could it be “trivial/circular/uninformative”?). Every account I’ve seen in the philosophy of consciousness strikes me as “trivial/circular/uninformative”, but I don’t rule out the possibility that someone some day might say something worthwhile.

RE: “The only thing "mysterious" is the qualitative nature of consciousness.  So, to be more specific, I'm just talking about qualitative nature of consciousness or qualia.”

I’m sorry, I don’t think this helps at all. If we’re asking what human consciousness is, we are necessarily asking, inter alia, what the “quality” of it is (and what would we even mean by quality in this instance anyway?)

The idea of so-called “qualia” has always seemed pointless to me. It’s just a little piece of jargon to describe … what? I regard the term as simply a handy way of deluding oneself into thinking one knows something one doesn’t. Jargon can often do that.

DA


2016-10-27
RoboMary in free fall
Reply to Glenn Spigel
Dear Glenn, I am afraid I have no idea what symbolic would mean – and not much more what 'based on attributes of the neuron' would mean. This sounds like the sort of fictional distinction that people like Putnam and Fodor created without really grasping what the problem was.

2016-10-27
RoboMary in free fall
Reply to Glenn Spigel

Re: “Now, philosophers may not know that Clark Kent is Superman etc etc etc..."

Most of this superman/robot/zombie/stuff comes out of so-called “analytic” philosophy.

Question: Why is it that analytic philosophy, which has so often argued that the continental version has lost contact with the real world, spends so much of its time talking about robots, zombies, superman, brains in vats, and other assorted Hollywood teenage fantasies?  Is that analytic philosophy’s concept of the real world? 

Alas, I suspect it is…

DA


2016-10-27
RoboMary in free fall
Hi Jonathan, 

I had tried to explain quite a few posts ago how I was using those terms (http://philpapers.org/post/22746), but the post was quite long and the conversation was becoming quite unfocused, so I will try to explain the terms again. They all relate to how the conscious experience is thought to relate to that which is thought to give rise to it, which I think is the kind of thing Chalmers is putting under the heading of 'the hard problem', and which, as I was understanding it, you were suggesting your account gives an answer to. I will use the Mark 19 robot in the original post to illustrate the types of relations I was considering. Please just read what I write about them, and do not try to think of them in terms of other philosophers' conceptions (that is not to suggest that no philosopher has considered them), as that might result in you making assumptions because you had assumed I was referring to the writings of such and such a philosopher, bringing in things which I had not actually written, which could lead to confusion, especially if you did not explicitly mention that you had done so. I hope you can grasp what I am getting at in each of these, but if not, then please do ask for clarification regarding any confusion about what I am attempting to convey.

Contextual Relation: The conscious experience relates to what the underlying (which could be a dynamic interaction of forces) represents given the context. So using the example in the original post the robot would consciously experience red in the first room, and blue in the second, and a switch between red and blue in the third, because that is what processing of the 255.0.0 signal represented in each of those contexts. Same signal in all three cases, and same processing, but a different representation based on context.

Symbolic Relation: The conscious experience relates to the underlying symbolising a certain conscious experience, so what the underlying represents depends on the symbolism. If the processing of the 255.0.0 symbolised red, then in all three rooms the conscious experience would be of red; if it symbolised blue, then in all three rooms the conscious experience would be of blue; if it symbolised green, then in all three rooms the conscious experience would be of green; and if it symbolised a sound or something else, then in all three rooms the conscious experience would be appropriate to the symbolism. So a symbolic relation is similar to a contextual relation in that the processing would represent something, but unlike a contextual relation what it represents would not vary depending on the context: a certain state will always represent or symbolise the same thing irrespective of the context.

Underlying Attribute Relation: The conscious experience directly relates to features of the underlying, which could be the dynamic interplay of forces. So if the conscious experience was, for example, considered to relate to the movement of electrons (which could perhaps be thought to reduce to a dynamic relation of forces), then the conscious experience could be thought to be, say, a brightness depending on the amount of electron movement; or if it was suggested to relate to the making and breaking of chemical bonds, then that too would be an Underlying Attribute Relation. So it is distinct from either the Contextual or the Symbolic Relation in the sense that with an Underlying Attribute Relation the conscious experience would be different if those attributes were different, whereas with the other two quite different underlyings could be thought to represent/symbolise the same thing.
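
If it helps, the first two can be caricatured computationally (my own invented sketch; the experiences are reduced to mere labels and the rooms of the original post to tags):

# Invented caricature of the Contextual and Symbolic relations.
# Experiences are reduced to labels; the 255.0.0 signal is
# processed identically in every room.

SIGNAL = (255, 0, 0)

def contextual_experience(signal, room):
    """Contextual: what the same processing represents depends on the room."""
    return {"room 1": "red", "room 2": "blue", "room 3": "red/blue switching"}[room]

def symbolic_experience(signal):
    """Symbolic: the same state always symbolises the same experience."""
    return {(255, 0, 0): "red"}[signal]

for room in ("room 1", "room 2", "room 3"):
    print(room, contextual_experience(SIGNAL, room), symbolic_experience(SIGNAL))

# An Underlying Attribute Relation would instead key the experience to
# features of the processing itself (e.g. how much electron movement),
# not to anything the processing represents.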

Regarding your explanation of why I am experiencing typing on a computer you stated:

"If you experience typing on a computer my suggestion is that a neuron gets an input of potentials that have a pattern that usefully indicates typing on a computer. The neuron could be in a vat, as long as someone plugs in synaptic boutons that will fire the right pattern of signals. Whether in that situation the pattern would be 'useful' is doubtful but that seems obvious, and an unrelated question."

So it seemed as though you were going for a representation idea rather than a direct relation to any underlying attribute, because you introduced the idea of indication. It also seems that you were not considering it to be a contextual relation, because you seemed to consider that the conscious experience would be the same even if the context was different (as in the brain-in-a-vat context). So it seemed as though you were going for what I termed a symbolic relation. But if you can distinguish between the terms that I used, then perhaps you could state yourself which category you felt your suggestion would fall into, or otherwise offer another distinct category that you felt it would fall into, if you felt the ones I offered were not applicable.

Yours Sincerely, 

Glenn    






2016-10-27
RoboMary in free fall
Reply to Glenn Spigel

Hi Glenn,


You misconstrue the analogy. The question is not whether philosophers know that Clark Kent is Superman. The question is whether they know Superman, given that they know Clark Kent. Let me put it more clearly (followed by a formulation).

Mary is analogous to Lois Lane.

Superman is analogous to what-it-is-like-to-be a person.

Clark Kent is analogous to how that person behaves under such and such conditions.

The premises are:

A.      Superman is Clark Kent.

B.      Lois knows Clark Kent

Therefore

C.      Lois knows Superman.

Similarly:

A.      What-it-is-like-to-be a specific person is how that person would behave under such and such conditions.

B.      Mary knows how that person would behave under such and such conditions.

Therefore:

C.      Mary knows what it is like to be that person.

Formally, the argument goes as follows (K is a knowledge operator and a, b are either individuals or propositions):

A.      a=b

B.      Ka

Therefore

C.      Kb

You keep on claiming that we don’t know that the identity statement holds, i.e. we don’t know that what-it-is-like-to-be x is identical to how x would behave. In an analogy, philosophers don’t know that Clark Kent is Superman. Formally, what you prove by your thought experiment is:

D.      ~K(a=b)

Nevertheless, D is not incompatible with C. Furthermore, C follows from A and B, even if D is true. Lois knows Superman, even though she doesn’t know that Clark Kent is Superman. Similarly, Mary knows what-it-is-like to be the robot in room 3, even though she may not know that what-it-is-like-to-be that robot is actually equivalent to how the robot behaves. So, Mary (or Lois) doesn’t need to gain more knowledge in order to know what-it-is-like to be that robot (to know Superman).


Yours,

Amit

2016-10-27
RoboMary in free fall
Reply to Derek Allan

Hi Derek,

If you prefer actual examples rather than fictional ones, you can substitute Superman with Lewis Carroll and Clark Kent with Charles Dodgson. Still, as to your question:

Question: Why is it that analytic philosophy, which has so often argued that the continental version has lost contact with the real world, spends so much of its time talking about robots, zombies, superman, brains in vats, and other assorted Hollywood teenage fantasies?  Is that analytic philosophy’s concept of the real world? 

I believe the answer is positive.

Best

Amit

2016-10-27
RoboMary in free fall
Reply to Glenn Spigel
Sorry Glenn but I cannot make much of any of that. I am unclear what you mean by relation. I know what people mean by a causal relation but am unclear about other sorts. I do think this is a philosophers' presumption here. Philosophers often talk about relations like a relation of reference and that seems to me to be a concept more to do with the way we point other people to ideas with language than any relation in the world as such. These are the problems, as I indicated, that generate the confusing concepts of broad and narrow content. I think one should stick to causal relations in the way science does. 
As far as I can see, all we need to understand consciousness is the concept of causal, or dynamic, relation, the acceptance of multiple points of view, each being a protagonist in the network of dynamic relations, and the assumption that phenomenality is only ever entirely proximal, as part of the direct relation to that point of view.

Meaning is a rather different problem and one has to take into account two sorts of meaning – meaning by and meaning to. That reflects the fact that in general terms what we mean by meaning is both useful co-variance with antecedent (perceived) events and useful co-variance with consequent (behavioural) events, considered in the relevant context. Brains handle meaning well because they are designed to self-adjust to a state where both sorts of co-variance match up. A confusing implication of this definition of meaning is that it is about the way repeated experiences and actions co-vary with the outside world. It tells us nothing about the matching of any individual internal sign or symbol to the outside world. So in a sense to say that some internal event associated with an experience means the dog is by the fire is not to instantiate at that time any specifiable relation of anything much to anything. My suspicion is that in the RoboMary examples there is no real answer to any of these questions. Even to start to make sense we would have to stipulate in which semiconductor unit the experience was instantiated and attribute the experience to that unit, not to RoboMary. The whole exercise is founded on sand – at least I agree with Derek about that.

2016-10-27
RoboMary in free fall
Reply to Amit Saad

error - deleted


2016-10-27
RoboMary in free fall
Reply to Amit Saad

Hi Amit

RE: ‘If you prefer actual examples rather than fictional ones, you can substitute Superman with Lewis Carroll and Clark Kent with Charles Dodgson.’

Not sure how that would help. Let’s take David Chalmers’ proposition re “zombies” for example. He writes:

“Zombies are hypothetical creatures of the sort that philosophers have been known to cherish. A zombie is physically identical to a normal human being, but completely lacks conscious experience. Zombies look and behave like the conscious beings that we know and love, but "all is dark inside." There is nothing it is like to be a zombie.”

That would become (if we use the Lewis Carroll option for example)

A Lewis Carroll is a hypothetical creature of the sort that philosophers have been known to cherish. A Lewis Carroll is physically identical to a normal human being, but completely lacks conscious experience. A Lewis Carroll looks and behaves like the conscious beings that we know and love, but "all is dark inside." There is nothing it is like to be a Lewis Carroll.

Do you feel that gets us anywhere?  In my view, it makes what was already nonsense into even worse nonsense.

RE: My question: "Why is it that analytic philosophy, which has so often argued that the continental version has lost contact with the real world, spends so much of its time talking about robots, zombies, superman, brains in vats, and other assorted Hollywood teenage fantasies?  Is that analytic philosophy’s concept of the real world? "

Your answer: "I believe the answer is positive."

So you agree with me that analytic philosophy – or at least large areas of it – lives in a juvenile fantasy world?  Do you think that’s a satisfactory position for a school of philosophy, largely based in publicly-funded universities, which presumably justifies its existence by claiming it is relevant in some way to the world of real human concerns?   

 DA 


2016-10-28
RoboMary in free fall

Hi Jonathan,

Do you have an understanding of what the word “correlate” refers to in the term the Neural Correlate of Consciousness?

Could you for example make sense of the following:

"A science of consciousness must explain the exact relationship between subjective mental states and brain states, the nature of the relationship between the conscious mind and the electro-chemical interactions in the body (mind–body problem)."

https://en.wikipedia.org/wiki/Neural_correlates_of_consciousness#Neurobiological_approach_to_consciousness

As a side issue, I do not agree with you that the whole exercise is founded on sand; it seems to me that we can build on solid ground. Personally I find it hard to believe that Derek does not understand what others mean by consciousness; that he cannot imagine what atheists imagine death with no afterlife to be like (personally I can find no ambiguity in it); that after stating that he could understand the brain-in-a-vat scenario he could not after all; and that neither could he understand what was meant by the question of whether he was aware of anything at all. Unless all he meant was that he could not understand what it meant in terms of scientific experiment (as though that was the scope of his understanding). As far as I am aware Einstein started that type of thinking off (and some might think it clever to think of it like Einstein), but he still understood the difference between hidden variables (which could have been non-local, indicating spooky action at a distance) and randomness. Derek also seems to have a problem understanding the utility of thought experiments, but the EPR thought experiment led later to experiments which, as I understand it, suggested spooky action at a distance.

Yours sincerely,

Glenn


2016-10-28
RoboMary in free fall
Reply to Amit Saad

Hi Amit,

I think you might not have understood my previous reply to you.

You state that:

Mary is analogous to Lois Lane.

Superman is analogous to what-it-is-like-to-be a person.

Clark Kent is analogous to how that person behaves under such and such conditions.

The premises are:

A.      Superman is Clark Kent.

B.      Lois knows Clark Kent.

Therefore

C.      Lois knows Superman.

As I touched upon last post, there is a distinction between an identity of composition and an identity of definition. You seem to have ignored that: you did not mention the distinction in your response, for example by clarifying which way you meant it. So I will go through the statements again, showing how the distinction matters.

If A is an identity of definition then, as I pointed out last post, it would simply be a change of definition, given that you are considering Superman to be analogous to what-it-is-like-to-be a person.

If however A is an identity of composition, then it does not imply that the features associated with the sense of the composition being Superman are the same as the features associated with the sense of the composition being Clark Kent.

If in B you meant that Clark Kent denotes a certain composition, and that Lois is acquainted with that composition, then C does follow, because Superman denotes the same composition that Lois is acquainted with. But that does not imply that Lois knows that the composition has the features associated with the sense of the composition being Superman (can fly, super strong, etc.).

If, however, in B you mean that Lois knows the features associated with a sense of the composition being regarded as Clark Kent (works as a fellow reporter, what he looks like with glasses on, etc.), then, if you had a similar meaning in C, C would not follow, as B does not imply that Lois knows the features associated with the sense of the composition being Superman (can fly, super strong, a susceptibility to kryptonite, etc.).

Thus when significant identity distinctions exist, as illustrated above, I think you need to be clear which type of identity you are referring to in the equations. Otherwise the equations can appear to be suggesting one thing, while intuitively you can understand that the philosophers would be learning something if they were to learn the answer to what they were in disagreement about. Hopefully the unpicking of the equations with the distinctions in identity types has helped highlight the issues. I disagree with the idea that the thought experiment was just showing

D.      ~K(a=b)

In terms of an identity of definition a != b, and in terms of an identity of composition a = b can be assumed by the philosophers; but that does not imply that knowledge of the features associated with a sense of 'a' implies knowledge of the features associated with a sense of 'b'. So they could assume a = b in terms of identity of composition and know the features associated with the definition of 'a', but not know the features associated with the definition of 'b', and so learn something new when they learn the features associated with the definition of 'b', having previously only known the features associated with the definition of 'a', as was intuitively obvious. Thus Mary would learn something new when learning what the experience of a certain colour was like (even if it was just that her guess was right). It was the distinction between knowing the features associated with a sense of how a person behaves and knowing the features associated with a sense of what-it-is-like that Dennett was not honouring.
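To compress the point into symbols (a sketch in LaTeX notation, nothing more; f_a stands for the features associated with the sense of 'a', as above):

(a = b) \wedge K(f_a) \not\Rightarrow K(f_b)

The step to K(f_b) would need the further premise f_a = f_b, and that premise is exactly what is in dispute: asserting it is just the change of definition discussed above.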

Yours sincerely,

Glenn



2016-10-28
RoboMary in free fall
Reply to Glenn Spigel
Dear Glenn,

I fully understand what people mean by neural correlates of consciousness. But I am not quite sure how that links up to our previous discussion. Maybe the idea is that to correlate is to have a relation, but I am not sure that it works like that. The sequence ABCDEFG may seem to correlate with abcdefg, but that does not imply any specific relation between the A and the a. This is the sort of situation where I think philosophical talk often throws up false arguments.

My way of thinking has virtually nothing in common with Derek's, which seems to me the epitome of academic philosophical confusion. But I agree with him that twentieth century analytic philosophy of mind is built on sand - it is just not quite as bad as the continental version that is built on woo-woo.

My reason for saying that virtually all twentieth century philosophy is built on sand is that if you read Locke and Leibniz's reply in New Essays you find all these Mary confusions are completely unnecessary. They arise from not getting clear what your words mean in certain contexts and from being detached from practical science.

2016-10-28
RoboMary in free fall
Reply to Derek Allan

Hi Derek,

You said: "we are necessarily asking, inter alia, what the 'quality' of it is (and what would we even mean by quality in this instance anyway?)"

OK, to answer this question let's take two identical people looking at a red strawberry with green leaves. For the second person you add a red-green inverting system in the perception process. This could be a red-green inverting camera system to look at the strawberry with. The camera is observing the same strawberry, but on its screen (through which the second person is looking at the strawberry) red and green are inverted. In other words, on the red-green inverted screen, the strawberry is green and the leaves are red. Also, the inverting system could be added at any other place in the flow of the perception process between what is being observed and the final knowledge. In other words, it could be inverted in the optic nerve, after the eye, instead of using a camera inverting system before the eye.
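To make the inversion concrete, here is a minimal sketch in Python (the (R, G, B) tuple representation, the sample values and the function name are all just for illustration):

def invert_red_green(pixel):
    # Swap the red and green channels of an 8-bit (R, G, B) pixel.
    r, g, b = pixel
    return (g, r, b)

strawberry = (200, 30, 30)  # a mostly red pixel
leaf = (30, 200, 30)        # a mostly green pixel

print(invert_red_green(strawberry))  # (30, 200, 30): the strawberry now shows as green
print(invert_red_green(leaf))        # (200, 30, 30): the leaves now show as red

The same swap could sit anywhere in the pipeline between the strawberry and the final knowledge; nothing about the strawberry itself changes.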

Obviously, there are two different important qualities to consider here: the quality of the light being reflected, at the initial cause of the perception process, and the differing qualities of the final result of the perception process, i.e. the qualitatively different knowledge of the strawberry of the two people looking at it. The "red" quality of the strawberry (i.e. reflecting 650 nm light) is a quality of the initial cause of the perception process, and the different "redness" or "greenness" qualities of the final result of each different perception process are the ones that are important. So we need a different name for these different qualities of knowledge - we use the different term qualia.

So, obviously, these two people both have knowledge of the same strawberry that is qualitatively very different.  The resulting inverted qualitative difference of their knowledge is what is important.  These different final resulting qualities are the answer to your question: ("what would we even mean by quality in this instance anyway ")

Brent

2016-10-28
RoboMary in free fall
Reply to Brent Allsop

Hi Brent

I confess I had to force myself to read through your post. The idea that something important can be said about the nature of human consciousness by “seeing red” or seeing red as green, etc etc strikes me as absurd. (Good grief! How many thousands of times have philosophers of consciousness gone on about this?!!)

First, it tells us nothing whatsoever about human consciousness specifically. Many animals – even relatively non-complex ones – see colours (even various birds, insects, and fish). Certain IT gizmos can as well. And doubtless all of them can be tricked by “inverting systems” if need be.  

Second, as the above implies, this is a simple issue of perception. You say: “These two people both have knowledge of the same strawberry that is qualitatively very different.” Knowledge?  They simply see different colours. “Knowledge” is a loose term but it readily suggests something far more significant than the mere experience of perceiving colours. And the only “qualitative” difference in question is the colour perceived. A fish or an insect could presumably see this same “qualitative” difference – and have the same “knowledge”.

In short, your example is a very slender thread indeed to hang the meaning of “qualia” on. As I’ve said, I think "qualia" is a useless bit of jargon that serves only to confuse – as jargon so often does. I earnestly recommend forgetting it – along with examples about “seeing red” etc.

DA

2016-10-29
RoboMary in free fall
Reply to Glenn Spigel

Hi Glenn,

Well, obviously I didn’t mean an identity in the sense of definition (this is why I used the equality sign, rather than the definition sign ‘=df’). I think that this option (identity in the sense of definition) is just a red herring.

What I mean by identity is a binary relation that is symmetric, transitive and reflexive. You call it an identity of composition. That seems to me an unnecessary qualification, but I don’t mind using your terminology. As you said, under this qualification A and B do entail C. This means that Lois knows Superman, and similarly RoboMary/Mary knows what it is like to see ‘red’. Now, you argue that there’s something which they don’t know, which is the “features associated with the sense of the proper name ‘Superman’ and those associated with the expression ‘what-it-is-like’.” (the claim you make is somewhat inaccurate. What Lois doesn’t know is not the features, but rather that these are features of Clark Kent). Anyway, your claim relies on a theory of ‘sense’ and ‘reference’ of proper names (and of expressions). You should notice that:

1.       The theory of sense and reference is a questionable linguistic theory. I don’t think that it was assumed that the Mary Knowledge Argument relies on such a linguistic theory (at least, I don’t recall Jackson making such a claim).

2.       If what-it-is-like is identical with a person’s behaviour, then every feature of what-it-is-like-to-see-red is a feature of that behaviour. Now, if we can describe the features of behaviour merely in physical terms, then we should be able to describe the features of what-it-is-like-to-see-red merely in physical terms. This would be a refutation of the conclusion of the Knowledge Argument.

For these reasons, I don’t see how you save the Knowledge Argument from Dennett’s criticism.

-------------------------

Still, I wish to stress that the Knowledge Argument is not our main issue here. The main issue is your Three Robots Argument, which aims to show that there’s a distinction between (a) what-it-is-like-for-the-robot-to have a certain experience, and (b) how the robot will act under such-and-such conditions. Formally, you need to prove:

1.       ~(a=b)

Now, as you said before, the argument (if it works, which is questionable) shows that physicalists would not know how to form a proper reduction of a to b, i.e. they won’t know how to decide which reactions of the robot determine what-it-is-like-for-it to have a certain experience. Thus, they will have different claims (guesses?) about the experience of the robot in room 3. Formally, you show that

2.       ~K(a=b)

(K is a knowledge operator)

The lacuna which I find in your argument is that I don’t see how 2 entails 1. (There is another problem with the argument—I don’t think that your proof of 2 is valid, but we can leave this claim for now). So, please explain how you get 1 from 2.

Now, you can say that we’re talking about omniscient physicalists. In this case, 2 does entail 1 (this is how, I believe, you should have replied to my criticism of the knowledge argument). Still, your argument (and maybe here I get to your proof of 2) is based on claims regarding our knowledge of programming today (you made several claims like: “On what basis could a functionalist declare 255.0.0 not to be blue? Can you not see that any argument made that it should be considered as red could equally be made that it should be considered as blue, as neither require any difference in processing. The idea that there need be some indicator in the way the channel is processed is false, at least in the computers we use.”). Unfortunately, we are not omniscient physicalists.

This is in fact Dennett’s point. These thought experiments rely on our intuitions. Intuitively, we think that no physical theory would settle this discussion, but we are not omniscient physicalists, and our intuitions may be incorrect. So, at the moment the K operator does not represent the knowledge of an omniscient physicalist.
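To put the gap in one line (a sketch in standard epistemic-logic notation, nothing more):

\neg K(a=b) \nvdash \neg(a=b), \quad \text{unless } K \text{ is omniscient, i.e. } K\varphi \leftrightarrow \varphi \text{ for every } \varphi.

From the absence of knowledge of an identity, nothing follows about the identity itself; only the omniscience schema licenses the step from 2 to 1.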

P.S.

Notice that Dennett did not argue that there is such an identity between experiences and behaviour, i.e. he does not argue for:

3.       a=b

All he says is that 1 has never been proved and defended. So the burden of proof is on you.

---------

Finally, I wish to stress that I find your argument interesting, and I don’t want to discourage you from publishing it. All I try to do is to mention some points that I find problematic, so you can provide a stronger argument eventually.

Best wishes,

Amit    


2016-10-29
RoboMary in free fall
Reply to Glenn Spigel

Hi Glenn

Re: “Personally I find it hard to believe that Derek does not understand what others mean by consciousness …”

Which “others” did you have in mind, Glenn? What do they mean by consciousness? Happy to discuss their views ...

RE: “… and that he cannot imagine what atheists imagine death with no after life to be like”

I have strictly no idea what “no after life” is “like”. In fact, I think it’s absurd even to use the word “like” in this context. To compare two things one needs to have at least some knowledge of both. Like everyone, I have some knowledge of life; I have absolutely no knowledge of what no (after) life is. Neither does anyone – yourself included. Again, think of Hamlet's wonderful line: “that undiscovered country from whose bourn no traveller returns”. (In general, I get the impression that modern philosophers –  especially those from the “analytic” camp – read far too little good literature. On the other hand, comic books and third-rate Hollywood movies seem to be staple fare…)

Re: “...that after stating that he could understand the brain-in-a-vat scenario he could not after all …”

You’re being a mite disingenuous here, Glenn. I explained to you that like anyone I understood the (ridiculous) brain in vat scenario. (A ten-year-old could understand it.) What I do not understand is how anyone could think that it could make a serious contribution to a philosophical discussion of human consciousness (or indeed of anything).

RE: “Derek also seems to have a problem understanding the utility of thought experiments …”

I sure do. "Thought experiment" strikes me as a silly, pretentious phrase analytic philosophers have dreamt up to give some semblance of legitimacy to their penchant for juvenile fantasising (e.g. about brains in vats, “zombies” etc).

And it’s so typical that they would (mis)use the term “experiment” – once again hanging onto the coat-tails of science.

DA


2016-10-30
RoboMary in free fall
Hi Jonathan, 
Yes, I did bring the term neural correlate of consciousness up, because you seemed to be having trouble understanding what I meant when asking how the brain activity related to the conscious experience, which is also why I asked:
---

Could you for example make sense of the following:

"A science of consciousness must explain the exact relationship between subjective mental states and brain states, the nature of the relationship between the conscious mind and the electro-chemical interactions in the body (mind–body problem)."

https://en.wikipedia.org/wiki/Neural_correlates_of_consciousness#Neurobiological_approach_to_consciousness

---

But I was not clear whether you were answering in the affirmative, that you do understand what they meant by the mind–body problem. If you could, then why did you think I was using the idea of there being a relationship between them any differently from the way it is used in the quote?

Regarding the two series you gave, yes there seems to be a correlation between them, because there seems to be a relation between them, and a relation between A and a: same letter, different case. I could imagine one of those so-called intelligence tests stating:

Series 3 relates to Series 4 in the same way that Series 1 relates to Series 2; what is Series 4?

Series 1: ABCDEFG
Series 2: abcdefg
Series 3: XYZ
Series 4: ?

I am meaning relate in the sense given in the example: that if you know "blah", and know how it relates to "blahblah", then "blahblah" can be determined from "blah" and the relationship it has to "blahblah". I realise that is pretty rough, and you could point out that knowledge of "blah" and the relationship does not imply the ability to determine "blahblah", but why would you, since presumably it was just that you were not clear what I meant, not that you were being evasive. The categories I supplied were just rough categories, with the distinction between them pointed out (maybe unclearly, I do not know). Is there now any reason why you cannot explain the rough relationship between subjective mental states and brain states using the categories I supplied, or a new category if you feel that your idea does not fall into any of those?
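A minimal sketch, in Python, of the sense of relate that I mean (the rule "same letter, different case" plays the role of the relationship; the names are just for illustration):

# The relationship between Series 1 and Series 2 is "same letter, different case".
relationship = str.lower

series_3 = "XYZ"
series_4 = relationship(series_3)
print(series_4)  # xyz: determined by Series 3 together with the relationship

Given X and the relationship, Y is determined; that is all I am asking for in the case of brain states and conscious experience.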

Yours sincerely, 

Glenn

2016-10-30
RoboMary in free fall
Reply to Amit Saad
Hi Amit, 

Thank you for the encouragement, and after we have finished with this matter of Dennett, I would like to return to the more serious features (wider implications than whether Dennett once wrote something silly)  of the thought experiment. You write:
---
Now, you argue that there’s something which they don’t know, which is the “features associated with the sense of the proper name ‘Superman’ and those associated with the expression ‘what-it-is-like’.” (the claim you make is somewhat inaccurate. What Lois doesn’t know is not the features, but rather that these are features of Clark Kent). 
---

I think I misunderstood the analogy. I do not think the idea is that Mary knows the colour features associated with what-it-is-like but had not assumed it shared a composition identity with brain behaviour. I think the idea is that she knows that what-it-is-like can have a feature called "blue" but she does not know what the feature is, though she assumes it shares a composition identity with the brain processing involved when a person claims that the sky is blue, for example. I had assumed it was similar in your analogy, as you had no premise suggesting that Lois knew the features of Superman. I assumed that by her knowing Clark Kent you meant she knew that she was acquainted with the composition that had the features associated with the sense of the composition being Clark Kent. Though you claim that

A.      What-it-is-like-to-be a specific person is how that person would behave under such and such conditions.

is not a definition identity, I think you may not have understood what I meant by a definition identity, and thus may not have understood the distinction. By definition identity I just mean that both refer to the same features. So if a person said they were experiencing blue, what they meant by blue is the what-it-is-like feature that they refer to as blue, not the features that Mary was aware of, which Mary assumed shared a compositional identity with that feature. They do not share a definition identity, because they do not refer to the same features. The person need not have had a clue about those other features that Mary was aware of. A claim that they are the same features is just a change of definition, which we have discussed.

So consider the Morning Star and the Evening Star. You could know that the Morning Star has the feature of being seen in the morning but never have heard of the Evening Star, and not know it had the feature of being able to be seen in the evening (this is just a rough analogy and could be expressed better).

So apply your reasoning to a claim that there is no distinction between knowing the features of the Evening Star and knowing the features of the Morning Star:

a.      The Morning Star is the Evening Star.

b.      Mary knows the Morning Star

Therefore:

c.      Mary knows the Evening Star

Clearly this does not imply that Mary knows the feature of the Evening Star that it can be seen in the evening.

Now you could write it as you do:

A.      a=b

B.      Ka

Therefore

C.      Kb

Still that does not mean that Mary knows that the Evening Star can be seen in the evening. And the problem is not 

D.      ~K(a=b)

Because let us imagine that someone tells Mary that the Morning Star is also known as the Evening Star but omits to tell her that it can be seen in the evening.

I am going to use the term f to mean the features associated with the definitional sense, so fa would mean the features associated with a:

A. a = b

B. K(a=b)

C. K(a)

D. K(fa)

Therefore 

E. K(b)

But not 

F. K(fb)

As you can see, with just a compositional identity K(fb) is not known. The danger of not making the distinction is that you can conflate the two identities and end up thinking of K(b) as meaning K(fb). You would get K(fb) if you were to go

A. fa = fb

B. K(fa = fb)

C. K(fa)

Therefore 

D. K(fb)

And from there I can see how you could be stating that all I had done was show that they did not know premise B. But actually the issue would be that A (which is what I am referring to as a definitional identity) would be false regarding the features that what-it-is-like usually refers to, and if it were to be regarded as true it would just be a change of definition. Like, for example, having a defining feature of the Evening Star being that it can be seen in the morning rather than in the evening, or, maybe worse, suggesting that its being able to be seen in the evening means it can be seen in the morning (so changing what having the feature of being seen in the evening means).
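A toy model of this opacity, in Python (all names and features here are hypothetical, purely for illustration): if knowledge is indexed by names (senses) rather than by the referent itself, then a = b and K(fa) together still do not deliver K(fb).

# Both names denote the same composition (a = b as an identity of composition).
referent = {"Morning Star": "Venus", "Evening Star": "Venus"}

# Mary's knowledge is indexed by name, i.e. by sense: she has K(fa) only.
mary_knows = {"Morning Star": {"can be seen in the morning"}}

def knows(name, feature):
    # True only if Mary knows the feature under that name.
    return feature in mary_knows.get(name, set())

print(knows("Morning Star", "can be seen in the morning"))   # True:  K(fa)
print(knows("Evening Star", "can be seen in the evening"))   # False: not K(fb)
print(referent["Morning Star"] == referent["Evening Star"])  # True:  a = b all along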

Dennett is effectively stating that no distinction has been shown between K(fa) and K(fb), for that is the distinction that Robinson was talking about, and an assumption that Dennett was making (assuming he was not changing the definition of what-it-is-like-to-be a person so that it no longer refers to features of what it was like, but instead to features observable from a third-person perspective).

But consider four philosophers who know how the robot in room 3 will behave:

A. Theorises it would see red.

B. Theorises it would see green.

C. Theorises it would see it change between red and blue but not notice, and if the robot brain had been in a vat and the input signals had come from a synthesizer and the outputs had gone to light up fairy lights then it would have experienced something appropriate to that context, but not noticed at least in the sense that whatever it was consciously experiencing it would not make any difference to the processing and the outputs.

D. Theorises that it would not consciously experience anything at all.

They can know how the robot will behave from a third person perspective, but clearly cannot all know what it would be like to be it.  

Yours Sincerely, 

Glenn

2016-10-30
RoboMary in free fall
Reply to Glenn Spigel

Dear Glenn,


We need to be much more careful. In my example I mentioned a relation between the A and the a, that is to say an actual relation between tokens. It might have been that the A was in the line above and to the right and as such would have been merely a chance spatial relation. We need to distinguish actual relations between tokens, and conceived rules of correspondence between types.

 

Both of these are relevant to understanding phenomenal experience but the quote you give seems to imply it worries about the first in ‘the nature of the relationship between the conscious mind and the electro-chemical interactions’, although it is ambiguous.

 

If we consider a particular actual electro-chemical interaction and the quale of red that we think might go with it for some dynamic element in this interaction then the question looks to be illegitimate. It is really the question ‘what other known category of relation does this one fit into’. When we ask what is the nature of something we are really asking for another example of the same class that we understand and can use as a pointer. We have no other source of information about the ‘nature’ of anything. What seems clear is that there is no other category of relations that the relation of physical dynamics to manifest experience for a subject falls into. Descartes covers this well. He points out that any complete theory of physics has to posit this as a relation that has no equivalent anywhere else in the theory. It is a brute fact of the direct impingement of proximal dynamic influences on a subject unit that they are manifest to it as experience. Period.

 

So in this sense asking for the nature of the relation might be considered totally illegitimate. However, it may be important at least to specify that it is a direct influence on a subject. Most present day neuropsychological theorists appear to throw away even that and expect experience to arise from complex interactions between large numbers of distributed elements. At least in that sense we have a legitimate question.

 

If we are asking about relations between types, these are not actual relations within a dynamic theory but rules of correspondence. For capital and small letters the rules are pure convention and inconsistent. For brains and experience we might expect some more consistent rules of correspondence based on natural laws or what Leibniz calls sufficient reasons. But we have two problems. Firstly, no one experience can be compared with another, so we have no way of establishing the truth of any statement about consistency. Secondly, when we talk of the quale of redness we are defining it by a functional relation to dispositional properties of the distal world. If different nerve cells have different functional relations to those properties, because they do different jobs, then each may host a different quale despite all being equally describable as redness. So for specific qualia we are pretty stuck on this one.

What may be more encouraging is that we may be able to say that a dispositional property of the outside world that has n perceivable degrees of freedom has to give rise to a sense of that number of degrees of freedom through a dynamic relation with at least that number of degrees of freedom. There are lots of riders to that but it may provide at least some basic grounding for knowing where to look for the events that are manifest as qualia. For a panoramic view we seem to need to look for some relation of input to a unit with maybe a hundred degrees of freedom as a minimum. And each degree of freedom will need to be controlled by a separate neural signal. That gives very few options for where the relation can be. It has to be pretty much what Descartes suggested – tiny movements of subtle fluids in hundreds of nerve fibres influencing some receiving element in a non-billiard ball fashion. In other words post synaptic integration.


Best wishes


Jo

2016-10-31
RoboMary in free fall
Hi Jo, 

With the A and the a I thought you were just talking about the ones in the series. And sure, the relation could be different from the obvious one, such that the answer to Series 4 could have been IkJ, because the relation might have been a spatial one in a piece of text. As I pointed out, though, by relation I only mean that, given X and its relation to Y, Y is determined. So when asking for the relation between X and Y, I am just asking for a way of determining Y given X.
You wrote:

"But we have two problems.Firstly, no one experience can be compared with another, so we have no way of establishing the truth of any statement about consistency. Secondly, when we talk of the quale of redness we are defining it by a functional relation to dispositional properties of the distal world. If different nerve cells have different functional relations to those properties, because they do different jobs, then each may host a different quale despite all being equally describable as redness."

Regarding the first problem, that seems like a non-issue, because I am just asking for your theory, and I would only plan on evaluating the idea through philosophical enquiry. Regarding the second problem, you write: "Secondly, when we talk of the quale of redness we are defining it by a functional relation to dispositional properties of the distal world", but who is "we" - functionalists? Are you saying you are a functionalist and therefore think that the relation I am enquiring about is a functional one? I am assuming you agree that functionalists believe there is a functional relation between the activity and what is consciously experienced.

Could you let me know what you think the robot will experience in room 3, and whether you are suggesting it would matter if it used analogue logic gates to achieve the behaviour? Could you please answer this last question, even if you pass over the others.

Yours Sincerely, 

Glenn

2016-10-31
RoboMary in free fall
Reply to Glenn Spigel

Dear Glenn,

You still seem to have missed my point about the difference between relations between tokens and relations between types. (The discussion is helpful for me because I think it answers a question I had. Token entities relate via causes. Types relate by reasons (in Leibniz’s sense). I had been thinking reasons were sort of the type for token causes. However, that would be wrong because the type for token causes is just the type: 'token causes'. So the difference between reasons and causes is not that one is a type and the other a token but that they respectively link types and tokens.)

 

It sounds as if you are just asking for the rules of correspondence, which is probably not what the guy in the quote wanted.


I don’t think it can be a non-issue if in principle there is no fact of the matter about the answer to a question. Only planning to evaluate through philosophical enquiry does not help. And philosophical enquiry as practiced recently tends to make serious blunders like confusing arguments about tokens and types and asking what a person or a robot experiences.

 

I am not taking a functionalist position when saying we have to identify a quale of redness by its causal relation to the outside world. In the absence of any way of comparing what qualia are like for different subjects, that is the only way we can define it, I think. I admit that I should perhaps have made it clear that I was using redness in the sense of an external dispositional property, like preferential reflection of long-wavelength light, rather than in the sense of a phantasm of redness.

 

Admitting we have to use a functional definition does not lead to functionalism as I understand it because the functionalist claims that where functional relation to the world is the same the qualia are the same. This is a useless hypothesis since there appears to be no fact of the matter whether or not they are the same. The other problem with functionalism is the one I mentioned before. In true philosophical style it conveniently ignores the point made by Bill Seager that for any dynamic relation there are an infinite number of levels of functional description and nobody says which one we should choose. The obvious one is the infinitely fine grained (i.e. with no limit to grain) immediate relation (since experience is most closely correlated with immediate or proximal relation) in which case you effectively have an identity theory and it is confusing to call it functionalism.

 

As for your final question, I think you already know what my answer is. Why should we credit a robot, a whole robot, with experience? What is the boundary of a robot? Does it include the online connection to GPS? Does it include the rubber bumpers on the feet? The same problem applies to ‘persons’. What is a person? We know what a human body is and we have reason to think each one is associated with at least one experiencing subject but why should that be a ‘person’. Descartes is to my mind much more sensible on this than people these days.

 

The question would be is there some dynamic unit within a robot that experiences a colour? In functional dynamic terms I think we can be pretty sure that there is no dynamic unit in a robot that relates to the world in the way that a human subject does – not in a remotely similar way. If the unit in the robot is a semiconducting gate it will only have one degree of input freedom. A neuron has 10,000. So I am unclear whether the question has an answer. In terms of whether the quale is the same as the one your human subjects think is familiar to them, as said, there appears to be no fact of the matter.

 

So I fear the whole story is chasing an Ouroboros tail. There is a very interesting question lying in the wings about what local biophysical dynamic relations are associated with the experiences we have and how they might be structured in terms of degrees of freedom and orthogonality etc. But the original motivation for the Mary story is based on a muddle of words, like 'physical fact', whatever that might be – which may be why most people have lost interest.

 

It is time for a paradigm shift.


Best wishes


Jo


2016-10-31
RoboMary in free fall
Reply to Derek Allan


Hi Derek,

You said: "They simply see different colours. 'Knowledge' is a loose term but it readily suggests something far more significant than the mere experience of perceiving colours. And the only 'qualitative' difference in question is the colour perceived. "

Yes exactly! We are both talking past each other, and talking about the same thing, when you say the only difference is the colour perceived. As I indicated, the inversion could happen inside the head, or after the eye, say in the optic nerve - or even in the eye itself. This could happen naturally at or before birth - due to a genetic mutation in the eye, or something. The inverted person, for his entire life, never knows anything different from, as you say, "perceiving a different colour" from the non-inverted person. When they both look at the strawberry, they will both call it "red" (what both their moms taught them to call it) - even though the inverted person is, as you say, "perceiving a different colour." When someone asks what it is like for you to perceive red, he is asking whether you have an inversion in your eye, or something like it, so that you are, as you say, "perceiving a different colour" - or not.

Brent Allsop

2016-10-31
RoboMary in free fall
Hi Jo, 
So is your answer that there is no relation between human neural activity and what is consciously experienced, such that if you knew how they related and knew the relevant neural activity you could (assuming you had the ability) work out what the conscious experience was? Or is it simply that you are not denying such a relationship, but do not know what it is, or even what type of category it would fall into (I had supplied a few options)?

I do not see how type vs token is of any importance with regard to human neural activity, unless you are assuming the relation would not be a general relation across all tokens of the human type, and would instead be on a token-by-token basis - at least the way I am using the word relationship. Which might be confusing, especially since I did introduce a quote which had used the idea of the nature of the relationship, which could be interpreted as referring to more than just a mapping, which is not what I was meaning. As I think we discussed earlier, physics is about the relationships, not the nature of the relationships; that tends to be a metaphysical discussion.

With regards to the robot, I am assuming that you are stating that you have no idea what type of activity would be associated with a conscious experience similar to ours rather than suggesting that I should have asked "Could you let me know what you think any part of the robot will experience in room 3 and whether you are suggesting it would matter if it used analogue logic gates to achieve the behaviour?"

As long as you are not plumping for any given relation category (though you seem to be rejecting functionalism) and not stating whether you can see any option other than ones falling into the relation categories that I supplied (I think I have now adequately explained what I meant by a relation in this: the mapping which would allow what the conscious experience was like to be determined), then I cannot point out any problem with your suggestion. Though that does not mean that I think that is a problem, because I think it is obvious that it is a Symbolic Relation as I described, and I think bionic eyes will support that; and I think you were suggesting that (even if you did not realise what I had meant by a Symbolic Relation) when you suggested the mapping from the neural activity to the conscious experience would be the same in a brain-in-a-vat. That raises the question of why an undesigned universe should have a symbolic relation that just happened to be one suitable for a spiritual being having a spiritual experience to base its choices on. That that does not seem to be what many philosophers had thought does not matter, but I do agree that it is time for a paradigm shift.

Yours sincerely, 

Glenn


2016-10-31
RoboMary in free fall
Reply to Brent Allsop

Hi Brent

RE: Yes exactly!  We are both talking past each other, and talking about the same thing, when you say the only difference is the colour perceived. 

But so what, Brent?  As I pointed out, the same could presumably happen in the case of a fish or an insect.  You’re talking about an issue of mere perception. And the same kind of thing could presumably occur in the case of olfactory or touch organs (especially since you say that “this could happen naturally at or before birth” – a bee for example that gets its smells wrong for genetic reasons).

You have a very, very long way to go to establish that any of this has any significant bearing on the question of human consciousness. Unless you think human consciousness is no more difficult to understand than the sight mechanisms of (e.g.) a fly.

In short, I think it's highly unlikely that we're "talking about the same thing". 

DA


2016-11-01
RoboMary in free fall
Reply to Glenn Spigel
Hi Jo, 

Just writing again as I do not think it was clear when I wrote:

With regards to the robot, I am assuming that you are stating that you have no idea what type of activity would be associated with a conscious experience similar to ours rather than suggesting that I should have asked "Could you let me know what you think any part of the robot will experience in room 3 and whether you are suggesting it would matter if it used analogue logic gates to achieve the behaviour?"

that I was not suggesting that you had no idea what type of activity would be associated with a conscious experience in a human (you have already mentioned what you thought was associated with it); I just meant whether, in a robot, analogue logic gates would be suitable, for example. Or whether, if an alien had cells of a dramatically different structure, for example, any part of it could have a conscious experience similar to yours. I think the robot highlights (at least part of) what Chalmers was referring to as "the hard problem".

Yours sincerely, 

Glenn



2016-11-01
RoboMary in free fall

Hi Jo,

You said:

"In functional dynamic terms I think we can be pretty sure that there is no dynamic unit in a robot that relates to the world in the way that a human subject does - not in a remotely similar way"

I'm having trouble understanding what you mean by "relates to the world in the way the human does". And I don't understand how differing degrees of freedom (or differing resolutions?) matter. Given the following, let me know whether I do understand what you mean by "relates to" after all.

It simply seems to me that an abstracted word like "red" does not have a redness quale. But the word "red" can relate to a redness quale, if you know how to interpret it qualitatively correctly. Why can't a robot relate to "redness" the same way the word "red" does - by using abstracted symbols that aren't actually red? It can describe and know everything about it, as long as you know how to properly interpret its abstracted knowledge. Whereas we, as qualitatively conscious humans, use the real redness quality to represent red - so for us there is no interpretation required; there is no "relates to", it just is. Is that not the simple functionalist fact of the matter describing this difference between a robot and a human?

Brent Allsop

2016-11-01
RoboMary in free fall
Reply to Derek Allan

Derek said: "As I pointed out, the same could presumably happen in the case of a fish or an insect."

Yes

Derek said: "You're talking about an issue of mere perception."

Yes, for the trivial case - but once you understand this, the general qualitative idea covers everything about the so called "hard problem".

Derek said: "And the same kind of thing could presumably occur in the case of olfactory or touch organs (especially since you say that 'this could happen naturally at or before birth'"

Yes

Derek said: "a bee for example that gets its smells wrong for genetic reasons"

Yes, but I wouldn't say "wrong" in most cases, since the intelligent behavior is also inverted, i.e. the intelligent behavior is indistinguishable. The actual color being perceived is the only thing changing. For example, let's say the "redness" you perceive red with is like the greenness most people experience when they see green, due to your "genetic defect" in the eye. Just because your eyes are in the minority doesn't make it a "defect". We all still behave the same - all calling it red.

Derek said; "Unless you think human consciousness is no more difficult to understand than the sight mechanisms of (e.g.) a fly"

Yes, that could very well be the case. But we can't qualitatively perceive this when we look to see if there is any redness and greenness being perceived in a fly's brain. This is because we are qualitatively blind, i.e. we interpret everything abstracted we detect according to the quality of the initial cause of the perception process. This is what makes us qualia blind, since it "corrects" for any so-called "wrong" interpretation (as you are doing) in the eye or brain, leading us to believe we all experience "red" the same. Whereas if we interpreted things correctly, according to the qualitative nature of what we are actually perceiving, or the final results of the perception process, we would no longer be qualia blind - the so-called "hard problem" solved.

Derek Said: "In short, I think it's highly unlikely that we're 'talking about the same thing'"

Yes, it is that easy, and I think we are all talking about the same things - we, like Mary, know everything about the causal properties of the qualitative nature of consciousness - we just don't yet know how to qualitatively interpret the abstracted knowledge we have (the abstracted knowledge we get from our abstracted senses and instruments) that describes it.

Brent Allsop

2016-11-02
RoboMary in free fall
Reply to Brent Allsop

Hi Brent

RE: Yes, for the trivial case - but once you understand this, the general qualitative idea covers everything about the so called "hard problem".

First, I think the division of the question of human consciousness into a so-called easy and hard problem is sheer bunkum – a silly red herring. There is no easy problem in this case, and Chalmers’ description of the so-called “hard problem” is, to my mind, hopelessly vague and quite useless. Care to tell me what you think the "hard problem" is? (I’ve seen several versions…)

Second what “general qualitative idea”?  How did this enter discussion of sight in a fly?

Re: “Yes, but, I wouldn't say "wrong" in most cases since the intelligent behavior is also inverted i.e. the intelligent behavior is indistinguishable etc”

What “intelligent behaviour”? Where did this notion suddenly come from? What does it mean in this context?  In any case, whether it’s wrong or right or whatever is of absolutely no consequence here. We are talking about the relevance of an instance of mere perception (eg sight in a fly) to the nature of human consciousness.  

RE: “Derek said; "Unless you think human consciousness is no more difficult to understand than the sight mechanisms of (e.g.) a fly". BRENT: ‘’Yes, that could very well be the case.”

Once you concede this (and I don’t really see another option) your whole argument falls to the ground. You are admitting that you may not be talking about human consciousness at all – which of course is highly likely. Unless, to make the point again, you think human consciousness is no more difficult to understand than the sight mechanisms of (e.g.) a fly.

DA


2016-11-02
RoboMary in free fall
Reply to Derek Allan
Hi Derek, 
You asked what others I was referring to when I wrote: "Personally I find it hard to believe that Derek does not understand what others mean by consciousness". Well, any that use the term what-it-is-like, for starters; and also what Bertrand Russell meant by sense data, or what Berkeley was suggesting - and, by the way, Berkeley was denying that the physical existed: http://plato.stanford.edu/entries/berkeley/#2

---
In his two great works of metaphysics, Berkeley defends idealism by attacking the materialist alternative. What exactly is the doctrine that he's attacking? Readers should first note that “materialism” is here used to mean “the doctrine that material things exist”. This is in contrast with another use, more standard in contemporary discussions, according to which materialism is the doctrine that only material things exist.

---

I could go on and mention Brentano, Malebranche and others, but there seems little point, as it will include pretty much any philosopher of mind who was not an eliminativist (arguably behaviourists either are eliminativists or were not discussing what was consciously experienced as the others were, but had just redefined the term). Or what Jackson was considering Mary not to know in the thought experiment that the original post is connected to.

Regarding your inability to imagine what atheists imagine death with no after life to be like, you again go back to relating it to knowing what death is like, but as pointed out to you earlier http://philpapers.org/post/21670 that reply is inappropriate. It is equivalent to claiming that you could not understand any mythological description of an afterlife because you did not know what death would be like. Are you claiming that you cannot understand any mythological description of an afterlife? Because if not, then it seems to me that for the second time (even after it had been explained to you why the response was inappropriate) you tried an inappropriate response to a question, the inappropriateness of which I believe would have been apparent to the majority of non-philosophers. If that is the case, then what is your explanation for giving such a response: did you forget that it had already been pointed out why the response was inappropriate, and were you unable to see it for yourself?

Oh so you did understand the brain-in-the-vat thought experiment, I was not trying to be disingenuous. You did mention earlier that you understood it http://philpapers.org/post/22082 but when I went on to use that to explain what I was referring to by consciously experiencing  http://philpapers.org/post/22142 you replied

---
 When I said “Of course, I understand what's being said” I meant I understood the words – the Hollywood scenario being described. Philosophically, I think it is pure, unadulterated, juvenile nonsense. 
---

giving me the impression that you had understood the words but not what he had meant by experience, because if you had known what was meant then you would have known what I meant by consciously experience, because as I had explained in the post I gave a link to ( http://philpapers.org/post/22142 ) 

---
The term "consciously experience" as I am using it in the original post can considered as a synonym for the 'experience' expression as Putnam was using it. And the term "what it is like" refers to what the conscious experience is like, and in the Putnam imagining the evil scientist was imagined to be able to determine what it would be like.

---

So were you suggesting that several posts back you were aware of how I was using the term consciously experience, because you understood what Putnam meant by experience, and I had told you that you could consider that I was using the term "consciously experience" as a synonym for the word "experience" as Putnam was using it; or are you going to claim that you did not understand what Putnam meant by the term "experience"; or are you going to go for the misinterpret-what-Putnam-meant approach (and so not be able to understand what a ten-year-old could)?

Yours sincerely,

Glenn

2016-11-02
RoboMary in free fall
Reply to Brent Allsop
Dear Brent,

I think you have gone off at a complete tangent from what I was talking about.

My assumption is that a human subject experiences red when it has certain causal relations to the world - in simple terms the messages for sensing red are its input. When it experiences blue there will be different signals coming in - a different relation to the world. And another for green and yet others for pink, purple, brown, magenta, orange, puce, black, chestnut, turquoise etc. 

It is important to note that we are talking of a human subject - whatever element of the brain that is - not a human. There is no such thing as a human person. There is a human body but that is not what experiences. Then there are things that experience within human bodies and we assume they are bits of brain for good empirical reasons that have been around since Hippocrates.

A robot contains no element that has an input that has enough possibilities to cover all the colours above. The elements that get information are binary logic gates that only get four options - 00, 01, 10, 11. So no room for magenta, or the smell of coffee. So there is nothing we know of in a robot that could relate to the world the way a human subject does - nothing with enough possibilities. Robots do not experience the world any more than persons do.
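To put rough numbers on that (a sketch only; the figure of 10,000 inputs for a neuron is the one I used earlier, and treating each input as binary is a crude simplification):

# Distinct input states of a two-input binary gate
gate_states = 2 ** 2
print(gate_states)  # 4: the options 00, 01, 10, 11

# A neuron treated, crudely, as a unit with 10,000 binary input lines
neuron_states = 2 ** 10_000
print(len(str(neuron_states)))  # 3011: the number of decimal digits in that count

Four input states against a number with over three thousand digits - that is the gulf in possibilities I mean.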

2016-11-03
RoboMary in free fall
Reply to Glenn Spigel

Hi Brent

Re: Regarding your inability to imagine what atheists imagine death with no after life to be like, you again go back to relating it to knowing what death is like, but as pointed out to you earlier http://philpapers.org/post/21670 that reply is inappropriate. It is equivalent to claiming that you could not understand any mythological description of an afterlife because you did not know what death would be like. Are you claiming that you cannot understand any mythological description of an afterlife?

There have been many, many “mythological descriptions” of an afterlife, some of them quite fascinating – the beliefs of the ancient Egyptians, for example. (And these were not “myths” for them, by the way: they were eternal Truth.)

But all that is beside the point because you asked me what atheists think about the matter. First, as I pointed out, it would be difficult to generalise. How do I know what all atheists think? But assuming you mean people who firmly believe there is no afterlife, then my reply must be, once again, that I cannot see how they could possibly know what “death with no after life” is “like”. To repeat my earlier point: to compare two things, one needs to know something about both. Our putative atheist presumably knows something about life, but what does he know about a “non-life” (i.e. death)? Nothing. Nada. So, the very notion of a comparison (a “like”) falls to the ground here, as I said.

By the way, do you know what non-life is “like”? If so, please let me know. I would be fascinated to learn. (I am not an atheist but I am an agnostic – which in this context boils down to much the same thing.)

I find the rest of your post (re brains in vats, Putnam etc) a little hard to follow. Perhaps you could condense it a little?  

I should just perhaps add the footnote that I regard the “Nagel” proposition: “There is something it is like to be conscious” as sheer, unadulterated hogwash. I have explained why on other threads but I will spare you the explanation here, unless you are interested.

DA


2016-11-03
RoboMary in free fall
RE: "There is no such thing as a human person"

Well, that puts paid to any revival of humanist thought!

But wait: "...there are things that experience within human bodies and we assume they are bits of brain "

So perhaps there is a glimmer of hope - a "thingism" perhaps?  A bits-of-brains-ism"?

DA


2016-11-03
RoboMary in free fall
Reply to Derek Allan
Hi Derek, 
The post was from me not Brent. 

So you acknowledge that you do not need to know what death was like to understand what an imagining of it was.

I can understand the distinction you are now making between an atheist assumption of a lack of an afterlife and there being an afterlife to imagine, especially when it comes to imagining it in terms of what-it-is-like. Since for a comparison there must be at least two things to compare (a comparison is not a unitary operator, so to speak), if one of them is "nada" then it can be thought of as an absence of something which can be compared. Thus, I think, the idea that it-is-like-something to consciously experience (the "nada" being eliminated). But since the "nada" is not ambiguous, neither is anything other than "nada" (thus the question of whether you were aware of anything at all).

Jackson's idea that, if there were features other than those observable from a third-person perspective, reality was not physical entailed the assertion that all physical features are observable from a third-person perspective - a flaw in his argument.

A zombie should not be thought of as a human minus certain features (as people like Dennett thought of it). Instead it should be thought of as a case where one imagines only those features described (those observable from a third-person perspective), and does not add the features which one knows the human has but which were not mentioned in the description, instead imagining that the features in the description were all that it had.

Yours sincerely,

Glenn

P.S.

Regarding thought experiments: in scientific experiments to do with theory (as opposed to just-see-what-happens experiments, in which it is easier to check the result than to work it out given current understanding) there can be a hypothesis and a null hypothesis, and the experimental result can distinguish which was correct. The thought experiment allows you to express your conclusions regarding whether the result would be as expected given the hypothesis or not, without knowing what the result actually was. In other words, the conclusion you would draw if the experiment showed the hypothesis to be correct, and the conclusion you would draw if the hypothesis was not correct. The EPR experiment that I mentioned (plato.stanford.edu/entries/qt-epr/) allowed the realisation that quantum randomness implied "spooky action at a distance" without doing the experiment, but by considering the conclusions that would be drawn; or perhaps, as is more likely, it encouraged the thinking up of an experiment which highlighted the implications of the idea (the implications being logically derived), which could then be tested. Though with the latter, one could still consider the conclusion drawn from the hypothesis being confirmed or rejected.

2016-11-03
RoboMary in free fall
Reply to Glenn Spigel
Hi Glenn

Sorry about the mix-up in names - and apologies to Brent too.

I shall reply to your points a little later.

DA

2016-11-03
RoboMary in free fall
Reply to Glenn Spigel
Hi Glenn

RE: So you acknowledge that you do not need to know what death was like to understand what an imagining of it was.

Not really what I said. People can of course imagine all kinds of things about a possible life after death - Heaven, Hell fires, crossing the river Styx, the list is endless.  But if you ask me – an agnostic – what I imagine life, or non-life, after death to be, I have strictly nothing to say. You might as well ask me how to give the proof of e=mc2 or something of the kind. I have absolutely no idea. The quote from Hamlet I gave you sums up my view perfectly.

RE: “A zombie should not be thought of as a human minus certain features…”

I was just quoting from Chalmers. He is the inventor of the “zombie” idea, as I understand it. And the key part of his definition is “A zombie is physically identical to a normal human being, but completely lacks conscious experience.” This of course is pure silliness. One is required to subtract an unknown – consciousness.

RE: “A thought experiment allows you to state the conclusions you would draw if the result were as expected given the hypothesis…etc”

“Result”? What “result”?  The scientist sets up his experiment and waits to see what happens – what the “result” is. The philosopher chooses his own so-called “result”.

DA



2016-11-03
RoboMary in free fall
Reply to Derek Allan
Hi Derek, 

I had earlier in post http://philpapers.org/post/23102 written

----------------------------

Oh so you did understand the brain-in-the-vat thought experiment; I was not trying to be disingenuous. You did mention earlier that you understood it http://philpapers.org/post/22082 but when I went on to use that to explain what I was referring to by consciously experiencing  http://philpapers.org/post/22142 you replied

---
 When I said “Of course, I understand what's being said” I meant I understood the words – the Hollywood scenario being described. Philosophically, I think it is pure, unadulterated, juvenile nonsense. 
---

giving me the impression that you had understood the words but not what he had meant by experience, because if you had known what was meant then you would have known what I meant by consciously experiencing, since, as I had explained in the post I gave a link to ( http://philpapers.org/post/22142 ) 

---
The term "consciously experience" as I am using it in the original post can considered as a synonym for the 'experience' expression as Putnam was using it. And the term "what it is like" refers to what the conscious experience is like, and in the Putnam imagining the evil scientist was imagined to be able to determine what it would be like.

---

So were you suggesting that several posts back you were aware of how I was using the term "consciously experience", because you understood what Putnam meant by experience, and I had told you that you could consider that I was using the term "consciously experience" as a synonym for the word "experience" as Putnam was using it? Or are you going to claim that you did not understand what Putnam meant by the term "experience"? Or are you going to go for the misinterpret-what-Putnam-meant approach (and so not be able to understand what a ten-year-old could)?

----------------------------

But you did not answer. Could you do so? 

Also are you claiming not to be aware of anything?

Yours sincerely, 

Glenn



2016-11-03
RoboMary in free fall
Reply to Glenn Spigel

Hi Glenn

RE: “…giving me the impression that you had understood the words but not what he had meant by experience..”

While important, the term “experience” is also very vague.  The ridiculous brain-in-vat idea does absolutely nothing to clarify it. Why would it? It’s just a puerile Hollywood fantasy.

RE: “And the term "what it is like" refers to what the conscious experience is like, and in the Putnam imagining the evil scientist was imagined to be able to determine what it would be like.”

Good grief, “evil scientist”. Is this philosophy or kiddies’ corner?

The “what is it like” thing comes from Nagel, and the hallowed formulation is actually not the form you are using but “There is something it is like to be conscious”. As I have said, I have shown on other threads that this Nagel mantra (I call it that because it is trotted out again and again so mindlessly) is pure bilge.  Happy to copy my proof into a post on this thread if you wish.

Re “because you understood what Putnam meant by experience,..”

Can you show me where I said that? 

Re: “Also are you claiming not to be aware of anything?”

No. I am aware, for example, of the pointlessness of all this stuff about zombies, robots, brains in vats, “something it is like”, evil scientists, etc.  This area of philosophy seriously needs to grow up.

DA

PS: I am happy to answer your questions, Glenn, but you need to quote me accurately.


2016-11-04
RoboMary in free fall
Reply to Derek Allan

Hi Derek,

Regarding where you wrote: 

---

Re “because you understood what Putnam meant by experience,..”

Can you show me where I said that? 

---

Well regarding the paragraph about the brain in a vat you wrote in post  http://philpapers.org/post/22082 :

"Of course, I understand what's being said..."

And in post  http://philpapers.org/post/22966 you wrote:

"You’re being a mite disingenuous here, Glenn. I explained to you that like anyone I understood the (ridiculous) brain in vat scenario. (A ten-year-old could understand it.) "

But now you seem to be stating that you did not understand what Putnam meant by 'experience' (I have added emphasis in the text to highlight the part I am referring to) when he stated:

"There seem to be people, objects, the sky, etc; but really all the person (you) is experiencing is the result of electronic impulses travelling from the computer to the nerve endings. The computer is so clever that if the person tries to raise his hand, the feedback from the computer will cause him to  'see ' and  'feel' the hand being raised. Moreover, by varying the program, the evil scientist can cause the victim to  'experience'(or hallucinate) any situation or environment the evil scientist wishes. He can also obliterate the memory of the brain operation, so that the victim will seem to himself to have always been in this environment. It can even seem to the victim that he is sitting and reading these very words about the amusing but quite absurd supposition that there is an evil scientist who removes people's brains from their bodies and places them in a vat of nutrients which keep the brains alive..."

But if you did not understand what he meant by the words experiencing or experience, why were you claiming that you could understand what he was suggesting, and that even a 10-year-old could? 

If you thought it was ambiguous then it would be useful if you mentioned the different interpretations that you were considering.

Regarding the awareness issue, have you ever been aware of any visual sensations, or auditory sensations, or sensations of pleasure or pain? If you are going to seriously deny ever having been aware of such sensations, then perhaps visit your university psychology department, because they might be interested in using you as a case study, assuming they do not think you are faking it. You seem to be claiming that, for all you know, all your brain activity is subconscious, and it would be interesting if that were true. Because you would effectively be a philosophical zombie; but, unlike the zombies philosophers imagined, zombies such as yourself would not act the same as people that consciously experience, being unable to grasp the concept of consciously experiencing. A problem I think you would face, though, is people not believing you. Do you think that all your brain activity might be subconscious?

Glenn



2016-11-04
RoboMary in free fall
Reply to Glenn Spigel

Hi Glenn

RE: Well regarding the paragraph about the brain in a vat you wrote in post  http://philpapers.org/post/22082 : "Of course, I understand what's being said..."

Yes, as I said, I understand the puerile scenario. I did not say – and still do not say – that I understand “what Putnam meant by experience” (see also below). I think any analysis of human experience/consciousness founded on such silliness is almost guaranteed to be a dead end.

Re the Putnam quote: There are holes a mile wide in this. E.g.:

(1)   Some of Putnam’s ridiculous little story is simply about perception – or at least what he presumes would be perception (he gets around that problem by scare quotes – e.g. “see”). So those elements could just as readily be about a fish or a fly, not human experience/consciousness.

(2)   He talks about the victim “[experiencing] (or [hallucinating]) any situation or environment the evil scientist wishes”. Now, what does he mean by “situation” here? A fly is in a “situation” when it sees something about to swat it. A human “situation” would be one that involved human experience, would it not? So, Putnam must already know, and be able to say, what human experience is. And if he can say that, he must also, presumably, be able to say what human consciousness is – since human consciousness, one assumes, is inseparably bound up with human experience (or if it isn’t, he would need to say why not). Now, if he knows all that (and of course he doesn’t acknowledge he knows all that) why not just tell us instead of taking us through this absurd charade? And if he doesn’t know all that, his whole charade begs huge questions – e.g. what does he mean by “situation”? (This by the way is usually the basic problem with all these silly fantasy things analytic philosophers drag up. Scratch the surface and you'll quickly find unacknowledged assumptions. The zombie thing is a classic example. Putnam’s is another.)

RE: “If you are going to seriously deny ever having been aware of such sensations…” etc.

Again, Glenn you do need to quote me accurately. I do not recall ever denying such a thing. Why on earth would I deny having had “visual sensations, or auditory sensations, or sensations of pleasure or pain”?  

DA


2016-11-04
RoboMary in free fall
Reply to Glenn Spigel
Dear Glenn, I think the problem may be that I have no idea what viewpoint you are arguing for. If I knew that then I might be able to interpret the way you want to recast what I say into your own terminology.

The problem with dynamic units within a robot experiencing like those in us is not really one of analogue and digital. That is a distinction that only really applies to our devices. The more relevant issue is the number of degrees of freedom. Someone with red-green colour blindness has one less degree of freedom to their experience than the rest of us. That is something we can show to fit the empirical evidence best. Whether red-green colour blind people see red as red or brown is I think an unknowable.

I am not sure where you are going with questions about spiritual beings. What is the definition of spiritual here? For most questions of this sort an anthropic answer tends to be all we have. This world has to be the way it is because it is the one we find ourselves in and it is that way. There is no relevant why question I think. I agree that it is reasonable to say that there must be ineffable reasons - not least to explain why the world seems to stick to the same rules - but that does not imply 'design' in the sense of some causal token act of design. 

2016-11-04
RoboMary in free fall
Reply to Derek Allan
Hi Derek, 

Were you considering that a camera phone with a facial recognition system and sound recording has visual sensations and auditory sensations? 

Do you distinguish between subconscious (or unconscious) brain activity and conscious brain activity? 

Yours sincerely,

Glenn

P.S.

Hi Jo - If you are reading this, I will have run out of posts for today, but will get back to you on Sunday.



2016-11-04
RoboMary in free fall
Reply to Derek Allan

Hi Derek,

To me, the general hard problem must be about the qualitative nature of perception in particular and the qualitative nature of consciousness in general.  For example, I could be experiencing or perceiving your greenness when I look at something red.  What, where, and how is the greenness or redness quality?  Most people consider this qualitative nature to be "ineffable", i.e. how can you know if I really experience your greenness when I see red?  Most people think you can't know what other people's perceptions and experiences are like - making this what most people would consider a hard problem.

As far as intelligent behavior, I'm talking about your and my behavior, even if, when I look at red, I experience what it is like for you to see green.  Despite the qualitative difference, we both mostly behave identically.  We can both pick the red strawberry from the green leaves, and we both call the strawberry red and the leaves green - even though one of us is really perceiving the inverse.

As far as the perception of a fly, we have the same problem as the difference between you and me.  The fly could perceive your redness when it sees red, or it could perceive your greenness when it sees red, or it could be something else qualitatively completely different.  We would be able to know such things if we could correctly interpret what we are observing in other brains (including fly brains).

The primary part of consciousness I'm talking about, is the conscious visual experience we have, when we look out at the world.  We consciously perceive the world.  There is lots to consciousness, but this consciousness of visual perception is what has the most obvious and easiest to talk about qualitative properties.  It is qualitatively like something to perceive red.  It is also qualitatively like something to love someone, but it's much easier to focus on color, first.  If we can figure that out, all the rest will be a similar solution.

Brent Allsop


2016-11-04
RoboMary in free fall
Reply to Glenn Spigel

Hi Glenn

RE: Were you considering that a camera phone with a facial recognition system and sound recording has visual sensations and auditory sensations? 

Huge problems here, Glenn. We say colloquially that the kind of gizmo you’re talking about “recognises” faces and sounds (advertisers love to say it too…). But what does “recognise” mean here? Does it mean the same as we mean when we say that a person recognises a certain face or sound?  Are we, in other words, happy to say that the gizmo is “conscious” in the same way a human being is?  If that is what you are saying, on what basis do you say it?

This is just another form of the problem I pointed out to you in the Putnam nonsense about a brain in a vat, where the word “situation” sets a similar trap. Philosophers are supposed to excel at detecting such traps and not falling into them. Few seem to manage it these days – not even the Putnams, Chalmers, etc.

RE: Do you distinguish between subconscious (or unconscious) brain activity and conscious brain activity? 

I don’t see the connection with the first question. And the phrase “brain activity” poses a problem. But setting that aside, if you’re asking me if I think there is a subconscious and if it plays a part in our lives, I would answer: probably. But the whole field is very debatable. Who is an orthodox Freudian these days, for example? And psychoanalysis generally is under a bit of a cloud now.

DA


2016-11-05
RoboMary in free fall
Reply to Brent Allsop

Hi Brent

If we are talking about the so-called “hard problem” we should probably go back to its source. This is the key paragraph in Chalmers’ definition. (He seems to be the source.)

The really hard problem of consciousness is the problem of experience. When we think and perceive, there is a whir of information-processing, but there is also a subjective aspect. As Nagel (1974) has put it, there is something it is like to be a conscious organism. This subjective aspect is experience. When we see, for example, we experience visual sensations: the felt quality of redness, the experience of dark and light, the quality of depth in a visual field. Other experiences go along with perception in different modalities: the sound of a clarinet, the smell of mothballs. Then there are bodily sensations, from pains to orgasms; mental images that are conjured up internally; the felt quality of emotion, and the experience of a stream of conscious thought. What unites all of these states is that there is something it is like to be in them. All of them are states of experience.

I think this is a joke. As is his description of the so-called “easy problems”.

Happy to say why if you are interested.

DA


2016-11-06
RoboMary in free fall
Reply to Derek Allan
Hi Derek, 

RE:  Are we, in other words, happy to say that the gizmo is “conscious” in the same way a human being is?

What do you mean by "conscious"?

The Mark 19 robot in the original post, supposing that it could pass the Turing Test, would it be conscious by definition given the way you are using the word?

And regarding the thought experiment in the original post, were you thinking that the robot in room 3 would be seeing colours, and if so, what colour(s) would you be thinking the robot in room 3 would be seeing?

Yours sincerely,

Glenn 




2016-11-06
RoboMary in free fall
Hi Jo, 

I am not arguing in this post for a certain point of view, I am just questioning other people's, and using some arguments to do so. So thanks for offering to recast what you say into the terminology I would use, but I do not think it is that simple; I do not think it is a case of us conceiving the same ontology but using different terminology to describe it. 

As I understand your belief:

There are forces.
Those forces are in a dynamic relationship with other in space and time.
Beings differ from each other because each is what it is like to be that dynamic relationship in a differing proximity of space and time.
The entities (including fields) referred to in physics are reducible to patterns in the dynamic relationships of those forces.
Cars, neurons etc., are reducible to the aforementioned entities referred to in physics.

Do you think from my paraphrasing of your position that I have roughly understood it?

If so, then are you suggesting that there is no relation between human neural activity (and perhaps any surrounding activity) and what is consciously experienced, such that if you knew how they related and knew the relevant neural activity you could (assuming you had the ability) work out what the conscious experience was? Or was it simply that you are not denying such a relationship, but do not know what it is or even what type of category it would fall into (I had supplied a few options)? 

Yours sincerely, 

Glenn



2016-11-06
RoboMary in free fall
Reply to Glenn Spigel
Dear Glenn, You are probably pretty much on track with my metaphysics.

I believe there is a universe and that universe is constituted by dynamic relations between units of force (strictly speaking in a Leibnizian rather than Newtonian sense: units of 'drive'), more technically described as modes of excitation of fields in QFT, and all else ('other') in the form of fields of potentials.

These units differ from each other in having different 'dynamic autobiographies' that are their relations to other from a particular standpoint (which need not be a unique location in spacetime). I would not say a unit IS what it is like to have such a relation. That sounds like a category mistake. But 'what it is like to be' that unit is what it is like to have its relation to other.

Cars and neurons are constituted in very complex ways by such units.

The putting into own words issue I think arises more with the next bit. I am not sure what a relation between neural activity and conscious experience would be. I would prefer to say that I think it very likely that there is some rule of correspondence between token instances of neural reception of signals (what Kripke called stimulation and Papineau confusingly changed to firing, which is something quite different) and token instances of experience of the sort that might be an isomorphism or even an identity. By that I mean that for any set up of an instance of a neural reception of signals there is only one experience that can arise and what that is is determined by some systematic rule that hopefully would make sense if we knew what it was. 

The next point is that since we do not know what dynamic unit in a neuron or group of neurons is the receiving unit in the experiences we report, we have no way at present of pinning down the rule of correspondence on the basis of direct evidence. 

But the more important point is that for at least some aspects of experience there can never be a fact of the matter whether or not two experiences are 'the same' because there is no possibility even in metaphysical terms of comparing the experience of one unit with another. So there is a principled epistemic block to ever demonstrating the relation of correspondence at this level.

I think this block would apply to questions like 'is my red your green?'. I am actually pretty sure that incoming signals never mean just red or green, but we can maybe bypass that. But, as indicated before, the epistemic block might not apply to 'my five oranges being your two oranges'. We may be able to say that if we start allowing that sort of disparity you get systematic knock-on effects on your rule system that make it impossible for there to be any rule that makes any sort of sense. People have argued this for spectral inversion - that blue and yellow cannot be reversed because the two colours relate differently to other colours. For instance there is very dark blue but no very dark yellow. This has to do with blue and yellow entailing different ranges of saturation. If you alter the shade of yellow without altering the hue, in colour-wheel terms, our receptors re-interpret it as a different sensed hue. However, I am not totally convinced that this argument holds. 
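
To make the saturation point concrete, here is a minimal sketch using only Python's standard colorsys module (the particular RGB triples are illustrative choices of mine, not measurements):

    import colorsys

    # Full-strength and darkened versions of yellow and blue, as (R, G, B) in [0, 1].
    colours = {
        "yellow":      (1.0, 1.0, 0.0),
        "dark yellow": (0.4, 0.4, 0.0),  # on screen this reads as brown/olive
        "blue":        (0.0, 0.0, 1.0),
        "dark blue":   (0.0, 0.0, 0.4),  # on screen this still reads as blue
    }

    for name, rgb in colours.items():
        h, s, v = colorsys.rgb_to_hsv(*rgb)
        print(f"{name:12s} hue={h:.2f} sat={s:.2f} value={v:.2f}")

In colour-wheel coordinates the darkened yellow keeps exactly the same hue (only its value drops), yet perceptually we call the result brown, a different sensed hue, whereas darkened blue is still seen as blue. That asymmetry is what the argument against blue-yellow inversion trades on.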

I am still unclear what you mean by category of relation here. As I see it we have a relation of correspondence that may be isomorphic but suffers from an epistemic block. Can we say more than that?

2016-11-07
RoboMary in free fall
Reply to Glenn Spigel

Hi Glenn

Re: “What do you mean by "conscious"?”

That is precisely my point.  You are using the word “recognise”. I am asking you what you mean by the word.

If a human being “recognises” someone, we would normally say that such an act is part of what’s covered by the notion of conscious behaviour. (Not conscious in the mere sense of being awake, but in the sense of an operation of human consciousness.) I am asking you if that's what you mean when you use the term “recognise” in relation to the kind of electronic gizmo you’re talking about.

I gather some philosophers think it is the same thing. Chalmers is one, I think? My view is that in the absence of sound proof – which would of course need to include a persuasive definition of what consciousness means in the case of human beings – that claim would be just a wild guess, unworthy of any respectable philosopher.

RE: “And regarding the thought experiment in the original post, were you thinking that the robot in room 3 would be seeing colours, and if so, what colour(s) would you be thinking the robot in room 3 would be seeing?”

The same problem arises here. What do you mean by “see”?

A philosopher always needs to be on the look-out for hidden, unacknowledged assumptions. Though that seems to be a lost art in the philosophy of consciousness ...

DA


2016-11-07
RoboMary in free fall
Hi Jo, 

You wrote:

I am still unclear what you mean by category of relation here. As I see it we have a relation of correspondence that may be isomorphic but suffers from an epistemic block. Can we say more than that?

Well I outlined several categories of relation earlier, with examples distinguishing the difference:

Contextual Relation: The conscious experience relates to what the underlying (which could be a dynamic interaction of forces) represents given the context. So using the example in the original post the robot would consciously experience red in the first room, and blue in the second, and a switch between red and blue in the third, because that is what the processing of the 255.0.0 signal represented in each of those contexts. Same signal in all three cases, and same processing, but a different representation based on context.

Symbolic Relation: The conscious experience relates to the underlying symbolising a certain conscious experience, so what the underlying represents depends on the symbolism. If the processing of the 255.0.0 symbolised red, then in all three rooms the conscious experience would be of red; if it symbolised blue, then in all three rooms the conscious experience would be of blue; if it symbolised green, then in all three rooms the conscious experience would be of green; and if it symbolised a sound or something else, then in all three rooms the conscious experience would be appropriate to the symbolism. So a symbolic relation is similar to a contextual relation in that the processing would represent something, but unlike a contextual relation what it represents would not vary depending on the context; a certain state will always represent or symbolise the same thing irrespective of the context.

Underlying Attribute Relation: The conscious experience directly relates to features of the underlying, which could be the dynamic interplay of forces. So if the conscious experience was, for example, considered to relate to the movement of electrons (which could perhaps be thought to reduce to a dynamic relation of forces), then the conscious experience could perhaps be thought to be a brightness depending on the amount of electron movement; or if it was suggested to relate to the making and breaking of chemical bonds, then that too would be an Underlying Attribute Relation. It is distinct from either the Contextual or the Symbolic Relation in the sense that with an Underlying Attribute Relation the conscious experience would be different if those attributes were different, whereas with the other two quite different underlyings could be thought to represent/symbolise the same thing. 
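
To illustrate how the first two categories come apart, here is a minimal sketch in Python (the RGB and BRG camera wirings are the two versions described in the original post; the function names, the mapping tables, and the "dominant channel" rule are hypothetical simplifications of mine, not anything from a real Mark 19 specification):

    # The same three-channel signal arrives in every room.
    SIGNAL = (255, 0, 0)  # channels A, B, C

    # Category 1 (Contextual Relation): what the signal represents depends on
    # which camera version produced it, i.e. on the context.
    CAMERA_WIRING = {
        "room1_rgb": {"A": "red", "B": "green", "C": "blue"},
        "room2_brg": {"A": "blue", "B": "red", "C": "green"},
    }

    def contextual_experience(signal, room):
        a, b, c = signal
        channels = {"A": a, "B": b, "C": c}
        # The experienced colour is whatever the strongest channel represents
        # in this context (a deliberate simplification).
        dominant = max(channels, key=channels.get)
        return CAMERA_WIRING[room][dominant]

    # Category 2 (Symbolic Relation): the processing of 255.0.0 symbolises one
    # fixed experience regardless of context.
    SYMBOL_TABLE = {(255, 0, 0): "red"}

    def symbolic_experience(signal):
        return SYMBOL_TABLE[signal]

    print(contextual_experience(SIGNAL, "room1_rgb"))  # red
    print(contextual_experience(SIGNAL, "room2_brg"))  # blue
    print(symbolic_experience(SIGNAL))                 # red, whatever the room

On a Category 1 reading the experience flips with the wiring; on a Category 2 reading it cannot; a Category 3 reading would instead look at what the underlying hardware is doing and ignore both tables.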

If there is anything you are unclear of regarding a category, then perhaps mention the first category you are unclear of, and what it is about it that you are unclear of.  

Regarding the robot in the original post in Room 3 (you can assume it can pass the Turing Test etc.), would you be expecting it to experience any visual sensations, and if so what would you expect them to be?
Regarding your comment "can we say more than that?", I am not totally sure what you mean. I do not feel that you are justified in arriving at the conclusions you did, but you are obviously allowed to guess, and when guessing there is no reason to stop there, as you may just find that it is the relation of correspondence between the activity and the conscious experience that is the interesting part. 

Yours sincerely, 

Glenn 



2016-11-07
RoboMary in free fall
Reply to Glenn Spigel
Dear Glenn, I am afraid that none of your definitions help. They are the sort of definitions that philosophers use without thinking through exactly what their words can mean, with the result that the definitions are to a large extent circular. The functional relation between two things will be the functional relation. The dynamic relation will be the dynamic one. And so on for all the various ways one could define functional or dynamic or whatever. The problem with the argument about functionalism is that functionalism is not really an alternative explanation to something else. The relation between two things considered in functional terms is the functional relation - a tautological truth and so explanatorily empty. Where functionalism tries to claim explanation it becomes wrong.

I am at present reading the Philosophy of Universal Grammar by Wolfram Hinzen who is a philosopher of language in Barcelona. It contains about 350 pages of dense argument about how at present we are completely confused as to what meaning is about and how one might be able to unpack it by going through technical issues that arise in language syntax. It illustrates just how many layers of misconception one has to peel back before one gets at useful debate. In comparison the debates about Mary appear to me to be just about not having unpacked the basics. I think I have probably given explanations for all the arguments I think are relevant here. Introducing terms like symbol does not help because nobody really knows what they mean by a symbol - or at least there is no general agreement. And if we agree what we mean by symbolic relation then sure, it will be the symbolic relation - but so what? As far as I can see all we are looking for are the rules of correspondence between the local brain dynamics and the experience - which actually leaves out any issue about the distal events that are represented, which seem to have worked their way into your relations somehow.

2016-11-07
RoboMary in free fall
Reply to Derek Allan
Hi Derek, 
1) When I wrote:

"Were you considering that a camera phone with a facial recognition system and sound recording has visual sensations and auditory sensations?"

I was just using the normal meaning of "facial recognition system" with regards to computers. I consider it to be a slightly anthropomorphic phrase, because I am aware of another meaning of "recognition" (connected to consciously experiencing, a concept you seem unable to understand, so it would be a waste of time at this stage to try to explain it to you). But as long as you are aware of what a shop assistant would mean if they told you that the camera phone had a facial recognition system and sound recording, then you should be able to answer the question regarding whether it has visual sensations and auditory sensations, using the sense that you took the phrases to mean when you wrote:

'Why on earth would I deny having had “visual sensations, or auditory sensations, or sensations of pleasure or pain”? '    

2) In your last response you mentioned the notion of conscious behaviour, and I suspect that you regard something as conscious by definition if it displays what you term conscious behaviour. So, just to confirm, could you answer the question I asked you previously:

"The Mark 19 robot in the original post, supposing that it could pass the Turing Test, would it be conscious by definition given the way you are using the word?"

3) Regarding your question about what I meant by "see" when I asked you the question:


“And regarding the thought experiment in the original post, were you thinking that the robot in room 3 would be seeing colours, and if so, what colour(s) would you be thinking the robot in room 3 would be seeing?”


When answering, could you just use the understanding of the word "see" that you would have if someone were pointing to a tree and asking you whether you can "see" it.

4) Can you explain the ambiguity you were aware of with the question mentioned in (3) and the term "see"?

As a side issue, I do not think that you cannot understand what a philosophical zombie is supposed to be; rather, it seems that you are claiming not to understand what features we have that make us different from a philosophical zombie. You seem to be thinking that human beings have no features other than behavioural features which are reducible to the behavioural features of the entities studied in physics. Thus, given that you claim not to understand the distinguishing feature, you are already conceiving of us as philosophical zombies, and therefore are having difficulty understanding how a philosophical zombie is conceived to be different from how you are conceiving human beings to be. Though please do not get distracted by this at the expense of not answering the questions.
 
Yours sincerely,

Glenn 



2016-11-07
RoboMary in free fall
Reply to Glenn Spigel

Hi Glenn

RE: “I was just using the normal meaning of "facial recognition system", with regards to computers. I consider it to be a slightly anthropromorphic phrase, because I am aware of another meaning to "recognition" … etc

You are not seeing my point (although you seem to be getting close to it when you write "a slightly anthropomorphic phrase"). I’m saying that the word “recognition” in (to quote you) “the normal meaning of ‘facial recognition system’ with regards to computers” may mean something quite different from its meaning with regard to human beings. You keep wanting to equate the two. I am pointing out to you that, for all we know, this may be a huge error. To assume they are the same we would need to be able to say with confidence what human consciousness is. Can you do that? 

RE: “… could you answer the question I asked you previously: "The Mark 19 robot in the original post, supposing that it could pass the Turing Test, would it be conscious by definition given the way you are using the word?"

I have no idea. Unlike you, I am by no means sure what the word “consciousness” means with regard to human beings. And if I don’t know, how can I make the kind of equivalency you want to make? Do you know?

RE: “Can you explain the ambiguity you were aware of with the question mentioned in (3) and the term "see"?”

It’s perfectly simple. We say colloquially that a phone “sees” X. But we understand “see” (“recognise” etc) in our own terms – i.e. as an element of human consciousness.  Does a phone “see” in that sense? You are simply assuming it does – a huge assumption.

RE: ‘I do not think that you cannot understand what a philosophical zombie is supposed to be”

Oh, I understand what it is “supposed” to be. That’s why I think it is ridiculous nonsense.

DA

 


2016-11-08
RoboMary in free fall
Reply to Derek Allan
Hi Derek, 

Regarding a philosophical zombie, you wrote:

 ' Oh, I understand what it is “supposed” to be. That’s why I think it is ridiculous nonsense. ' 

If you think you understand what a philosophical zombie is supposed to be, and your understanding is that it is supposed to be a physical equivalent of a human that, unlike a human, lacks consciousness, then do you also realise that the term "consciousness" is not being used to describe any type of behaviour; in other words, it is not being used as a behaviourist would use the word? 

If you did not realise that, then you did not understand what a zombie is supposed to be.

If you did realise it, then what (other than a form of behaviour) were you thinking the word referred to?

Yours sincerely,

Glenn


2016-11-08
RoboMary in free fall
Reply to Glenn Spigel
Hi Glenn

RE: “If you think you understand what a philosophical zombie is supposed to be, and your understanding is that it is supposed to be a physical equivalent of a human that, unlike a human, lacks consciousness, then do you also realise that the term "consciousness" is not being used to describe any type of behaviour; in other words, it is not being used as a behaviourist would use the word?“


I have no idea how the word consciousness is being used in Chalmers’ “zombie” nonsense. He doesn’t say what he means by it so it’s absolutely anyone’s guess.

That’s why the whole thing is so transparently silly. One is being asked to subtract an unknown (and it’s doubly silly since, presumably, the whole purpose of the exercise is to throw light on what consciousness – this same unknown thing one has to subtract in the first place – is!!)

It’s philosophical absurdity. It would fit quite well into one of Ionesco’s plays… Yet how much ink has been spilt on it! (I wonder if there is a strain of masochism among philosophers of consciousness?)

DA

2016-11-08
RoboMary in free fall
Reply to Derek Allan
Hi Derek, 
If you did not understand what Chalmers meant by a philosophical zombie, because you did not know what he meant by consciousness, whose version of a philosophical zombie were you claiming to understand when you said you understood what it was "supposed" to be? 

Regarding Chalmers' version, I think it is easy to understand what he means by consciousness. However, I also think it would be easy to play a behaviourist claiming not to understand it, though I do not know why anyone would. People would not be obliged to explain it to the behaviourist, and if such a person were to claim they were not able to follow the conversation, then that would be their problem, not anyone else's. Also, any claim that such a behaviourist may make about there being no utility to any such discussions holds no weight, since they would also be claiming ignorance of what is being discussed. 
How do you differ from a behaviourist by the way? (As I understand it, behaviourism was given up on). 

Yours sincerely,

Glenn  

    




2016-11-08
RoboMary in free fall
Reply to Glenn Spigel

Hi Glenn

Re:  If you did not understand what Chalmers meant by a philosophical zombie, because you did not know what he meant by consciousness, whose version of a philosophical zombie were you claiming to understand when you said you understood what it was "supposed" to be?

It is presumably supposed to be an idea that tells us something about the nature of human consciousness (at least I assume that’s what it is – unless Chalmers is just fixated on Hollywood zombies for their own sake). Unfortunately, as I pointed out to you, it tells us precisely nothing. It is nonsense.

Re: “Regarding Chalmers' version, I think it is easy to understand what he means by consciousness”

Really? So what do you think he means?

RE: “How do you differ from a behaviourist by the way?”

Behaviourists – Skinner anyway – wanted to deny that there is any such thing as human consciousness (see his silly book Beyond Human Dignity or whatever it was). Everything was just "behaviour" – like watching rats in mazes. Not surprisingly, behaviourism fell in a heap. Though neuroscience strikes me as behaviourism reborn in another guise. It's behaviourism on a micro scale.  Some silly ideas never die.

DA


2016-11-10
RoboMary in free fall
Hi Jo, 
Do not worry about what I label the categories (you seem to be concerned that one uses the term "symbolic"), I can call them Category 1, Category 2, and Category 3 if you prefer. Were you suggesting that in your theory the answer would fit in with Category 2 (what I was labelling as a symbolic relation)? Remember you can always offer a different category type. I assume you understood all three.

You write:

"As far as I can see all we are looking for are the rules of correspondence between the local brain dynamics and the experience - which actually leaves out any issue about the distal events that are represented, which seem to have worked their way in to your relations somehow."

The distal events do not work their way into the Symbolic Relation (Category 2) or the Underlying Attribute Relation (Category 3), only into the Contextual Relation (Category 1). And some theories might opt for the Contextual Relation (Category 1), perhaps with some idea that, like the wave packet, it covers everywhere, or something else (I do not know; it would depend on the theory), but it would imply some sort of ESP, which I think could be disproved (with bionic eyes perhaps).  

You do think that the conscious experience does give information about the distal events though do you not? Did you not assume that the conscious experience gave you information that humans have brains, which have neurons etc., which in part led to your theory? 

 You wrote earlier in http://philpapers.org/post/23194:

"The problem with dynamic units within a robot experiencing like those in us is not really one of analogue and digital. That is a distinction that only really applies to our devices. The more relevant issue is the number of degrees of freedom. Someone with red-green colour blindness has one less degree of freedom to their experience than the rest of us. That is something we can show to fit the empirical evidence best. Whether red-green colour blind people see red as red or brown is I think an unknowable."

So imagine the processing in the Mark 19 used logic gates, and could pass the Turing Test, and make distinctions at least as good as a human (which would presumably require it to have at least the same number of degrees of freedom). Would you be considering it possible that it could have conscious experiences of trees and cars etc., in a similar way to you?

Yours sincerely,

Glenn 


2016-11-10
RoboMary in free fall
Reply to Derek Allan
Hi Derek, 

Are you stating that when you wrote regarding a philosophical zombies:

 ' Oh, I understand what it is “supposed” to be. That’s why I think it is ridiculous nonsense. '  

That you meant:

'It is presumably supposed to be an idea that tells us something about the nature of human consciousness (at least I assume that’s what it is – unless Chalmers is just fixated on Hollywood zombies for their own sake)'

Because you also wrote that you did not know what Chalmers meant by consciousness. So it seems that all you are saying is that you presumed that philosophical zombies are supposed to be an idea that tells you something about the meaning of a term whose meaning you do not know. Can you think of any features that a human being has which are not reducible to features discussed in physics?  If not, then as I wrote earlier:

"... I do not think that you cannot understand what a philosophical zombie is supposed to be, rather it seems to be that you are claiming not to understand what features we have that makes us different from a philosophical zombie. You seem to be thinking that human beings have no features other than behavioural features which are reducible to the behavioural features of the entities studied in physics. Thus given that you claim to not understand the distinguishing feature, you are already conceiving of us as philosophical zombies, and therefore are having difficulty understanding how a philosophical zombie is conceived to be different from how you are conceiving human beings to be. "

I think by consciousness Chalmers means the features that a human being has which are not reducible to features discussed in physics. That is not an assumption that they are not physical features, just that they are features not reducible to the features discussed in physics. Mental phenomena; in other words, all that you are aware of, with what you are aware of interpreted in a way that is compatible with Berkeley's idea of there being no material world. So avoid interpreting it in a way which assumes a physical world. Or can you not imagine such an interpretation of what you are aware of?

Yours sincerely,

Glenn


2016-11-10
RoboMary in free fall
Reply to Glenn Spigel
Dear Glenn, I am afraid that I do not really follow your categories because you are using the sort of ambiguous language philosophers have built for themselves. When neuropsychologists talk about these things the meaning is clear because an unambiguous language is used. 

With regard to a machine that passes a Turing test I would repeat my point that 'what it is like to see red' has nothing to do with 'how something behaves when it sees red'. That is for the simple reason that experience just involves the current input of information whereas behaviour arises from an interaction between that input and a pre-existing complex, variable, dispositional state. So Dennett's conflation of the two is just silly. As an example one might ask what it is like for White in a chess game to experience Black moving his knight to queen's pawn three. White's experience in this analogy does not include the positions of any of his own pieces - one might take it to include the positions of Black's pieces perhaps. White's behaviour will be totally different depending on the positions of his own pieces. 

So questions about whether some system will host somewhere inside it an experience with certain similarities to ours has nothing to do with its 'intelligence' or behavioural characteristics.  
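
As a minimal sketch of that input-versus-disposition point (the Python class, its fields, and the moves are hypothetical illustrations of mine, not anything from a real chess program):

    from dataclasses import dataclass

    @dataclass
    class White:
        # The pre-existing dispositional state: where White's own pieces stand.
        own_pieces: dict

        def respond(self, observed_move: str) -> str:
            # Behaviour is a function of the current input AND the state.
            if observed_move == "Nd6" and "e5-pawn" in self.own_pieces:
                return "exd6"  # a capture is available in this state
            return "Kf1"       # otherwise retreat

    # Identical input (the "experience"), different dispositional states:
    white_a = White(own_pieces={"e5-pawn": "e5"})
    white_b = White(own_pieces={"king": "e1"})
    print(white_a.respond("Nd6"))  # exd6
    print(white_b.respond("Nd6"))  # Kf1

The same observed move produces different behaviour depending on the pre-existing state, which is why behaviour cannot simply be read off from, or equated with, the experience that is its input.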

2016-11-10
RoboMary in free fall
Reply to Derek Allan

Hi Glenn

I’m getting a bit lost in your questions. Let me restate my position once again and we’ll go from there. 

Chalmers statement re zombies is: “Zombies are hypothetical creatures of the sort that philosophers have been known to cherish. A zombie is physically identical to a normal human being, but completely lacks conscious experience. Zombies look and behave like the conscious beings that we know and love, but "all is dark inside." There is nothing it is like to be a zombie.”

Now, leaving aside obvious time-wasters like “creatures of the sort that philosophers have been known to cherish” and “all dark inside”, we are basically left with two propositions:

(1)  A zombie is physically identical to a human but minus human consciousness

(2) There is nothing it is like to be a zombie

Now (1) could only be a meaningful proposition if one knew what consciousness is. Since we don’t – unless we draw on some other source (which would raise other questions) – number (1) is obviously dead in the water.  Put briefly, one cannot subtract an unknown, even notionally.

Number (2) is the sad old Nagel thing again. I’m not going to go into that here but take it from me, the “something it is like" proposition is totally vacuous and useless.  (If you don’t want to just take it from me, I’ll post the proof; otherwise I’ll spare you. And by the way I’m not the only one to have said the Nagel thing is gibberish).

So, there you are. The zombie thing is a load of codswallop. Always was; always will be. (There are a couple of obvious flaws in it but I’ll leave it there for now.)  

You also say: ”I think by consciousness Chalmers means the features that a human being has which are not reducible to features discussed in physics.”

Perhaps he does, but you can’t even get that from the zombie thing. As far as Chalmers’ statement is concerned, consciousness is an unknown. Now, an unknown is an unknown, which means it could be anything. It could even be zero. In which case, in terms of the Chalmers statement, consciousness might in fact be purely physical.

It’s worth bearing in mind by the way, that there is, after all, no such thing as a zombie. They only “exist” in Hollywood B grade movies, nowhere else; and their only distinguishing features there are usually things like a slightly vacant look and an odd gait. It always strikes me as a sad - even embarrassing - reflection on the quality of analytic philosophy that it has been willing to take this zombie thing seriously. Small wonder so little progress has been made in this area despite the decades spent on it!

DA




2016-11-11
RoboMary in free fall
Reply to Glenn Spigel
Sorry, that should have read "There are a couple of other obvious flaws in it". 
DA

2016-11-13
RoboMary in free fall
Hi Jo, 
Could you help me out then and point out the ambiguity you are finding with the relational categories that I was mentioning, perhaps starting with the first (sorry, I thought it was clear, but at least we are making progress):

Category 1 (Contextual Relation): The conscious experience relates to what the underlying (which could be a dynamic interaction of forces) represents given the context. So using the example in the original post the robot would consciously experience red in the first room, and blue in the second, and a switch between red and blue in the third, because that is what the processing of the 255.0.0 signal represented in each of those contexts. Same signal in all three cases, and same processing, but a different representation based on context.
Regarding the robot passing the Turing Test as I had written just above it (http://philpapers.org/post/23426):

You wrote earlier in http://philpapers.org/post/23194:

"The problem with dynamic units within a robot experiencing like those in us is not really one of analogue and digital. That is a distinction that only really applies to our devices. The more relevant issue is the number of degrees of freedom. Someone with red-green colour blindness has one less degree of freedom to their experience than the rest of us. That is something we can show to fit the empirical evidence best. Whether red-green colour blind people see red as red or brown is I think an unknowable."
Would passing the Turing Test not indicate that the robot had the requisite number of degrees of freedom? I thought it would, and that is why I had written:

So imagine the processing in the Mark 19 used logic gates, and could pass the Turing Test, and make distinctions at least as good as a human (which would presumably require it to have at least the same number of degrees of freedom). Would you be considering it possible that it could have conscious experiences of trees and cars etc., in a similar way to you?
But now you have stated in http://philpapers.org/post/23426 that:

... questions about whether some system will host somewhere inside it an experience with certain similarities to ours has nothing to do with its 'intelligence' or behavioural characteristics.   
So what does it have to do with, then (I had thought that behavioural characteristics equated to the dynamic relation of the forces)?

Yours sincerely, 

Glenn

2016-11-13
RoboMary in free fall
Reply to Glenn Spigel
Dear Glenn, I think I have got to the end of the line with the relation classification. I will pass on that.

An experience seems rich, and therefore to be one of many other possible experiences (the jug could have been blue, or there could have been two, or...). Richness implies many degrees of freedom - many ways of varying in different respects. 

The mistake is to think that these degrees of freedom apply to some complex system of sequential and parallel dynamic relations like a whole robot or a whole brain. To obey the laws of locality within dynamics and for physics as practiced by observation to make any sense, an experience has to be instantiated in a single dynamic relation with all the necessary degrees of freedom. So an experience needs to be explained by some very local event in a brain and if there are experiences in robots they would also need to be single local events. This is basically what Descartes worked out and was right about. In a brain there look to be events within dendritic trees of neurons where up to 50,000 independent signals, each with a degree of freedom, contribute to a single computational event. That looks pretty good for an experience of a sunset or still life or an orchestra playing. In robots built with semiconductor gates the largest number of degrees of freedom for the input to any one computational event is 2. How could that have enough variability to be either a sunset or Brahms' third symphony? 
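
As a minimal sketch of that comparison (the two binary inputs per gate event and the 50,000 dendritic inputs are the figures just given; the assumption that each dendritic input can take 10 distinguishable levels is an invented illustration, not a measurement):

    import math

    # Inputs to a single computational event, and distinguishable levels per input.
    gate_inputs, gate_levels = 2, 2               # a two-input semiconductor gate
    dendrite_inputs, synapse_levels = 50_000, 10  # 50,000 from above; 10 assumed

    # Number of distinct input states, compared in log terms (bits).
    gate_bits = gate_inputs * math.log2(gate_levels)
    dendrite_bits = dendrite_inputs * math.log2(synapse_levels)

    print(f"gate event:      {gate_levels ** gate_inputs} input states ({gate_bits:.0f} bits)")
    print(f"dendritic event: about 2^{dendrite_bits:.0f} input states ({dendrite_bits:.0f} bits)")

Whatever the exact level count, the single gate event has 2 input degrees of freedom while the single dendritic event has 50,000; that is the gap being pointed to. Whether an experience must be instantiated in one such local event is, of course, the contested premise.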

Intelligence and human and robot behaviour patterns are functions of the whole complex computational system as the outcome of masses of simultaneous and sequential events. That has nothing to do with what an experience is like. Dennett's suggestion that behaviour can be equated with experience is simply ridiculous. Chalmers completely missed the point too. You would not get fading qualia by replacing one neuron at a time. Each time you replace a neuron with a mass of silicon gates you lose one copy of rich experience. But the intelligence and behaviour of the robot you get by replacing all cells is exactly the same as that of the original brain.

2016-11-13
RoboMary in free fall
Reply to Derek Allan
Hi Derek, 
You wrote:
You also say: ”I think by consciousness Chalmers means the features that a human being has which are not reducible to features discussed in physics.”

Perhaps he does, but you can’t even get that from the zombie thing.

I think you can if you go for a charitable reading, as I have been taught to do, and which I think it makes sense to do. Although not explicit I assume Chalmers meant by "physical" something compatible with the metaphysics outlined by David Lewis when he described materialism as the 
‘metaphysics built to endorse the truth and descriptive completeness of physics more or less as we know it’
If you take that reading then consciousness would refer to the features that a human being has which are not reducible to features discussed in physics. Other definitions of the physical could include those features as physical features and not consider physics to be descriptively complete. So while different philosophers may use the same terms differently, what I am inquiring about is whether you are aware of any features that human beings have which you cannot understand how they can be reduced to the features discussed in physics?

You wrote:

(1)  A zombie physically identical to a human but is minus human consciousness

(2) There is nothing it is like to be a zombie

Now (1) could only be a meaningful proposition if one knew what consciousness is. Since we don’t – unless we draw on some other source (which would raise other questions) – number (1) is obviously dead in the water.  Put briefly, one cannot subtract an unknown, even notionally.
Number (2) is the sad old Nagel thing again. I’m not going to go into that here but take it from me, the “something it is like" proposition is totally vacuous and useless.  (If you don’t want to just take it from me, I’ll post the proof; otherwise I’ll spare you. And by the way I’m not the only one to have said the Nagel thing is gibberish).  

Regarding (1) Chalmers used the term "conscious experience" in the text you quoted (in post  http://philpapers.org/post/23450) which I do not find ambiguous. Furthermore he goes onto give further hints about what he means:
There is nothing it is like to be a zombie
and

Zombies look and behave like the conscious beings that we know and love, but "all is dark inside." 
While for comic effect you relate the idea of philosophical zombies to zombies in the films, presumably you understand the difference because as that last quote shows Chalmers explicitly states that the philosophical zombies "look and behave like the conscious beings that we know and love". Chalmers places the "all is dark inside" being in quotes because if it was "all is dark inside" then it would not be a zombie. Since you have acknowledged being able to understand mythological descriptions of an afterlife, imagine one where all you will be aware of is of darkness, or the colour black if you prefer. Can you do that? If you can then minus the awareness of the colour black, so that there is no awareness of anything.  Can you do that?

Regarding (2) even though I understand what Nagel meant, and like many do not think it is useless, I would be interested in your "proof" that we do not. If you simply mean that it can be misunderstood, rather than cannot be understood, then fine, I do not doubt that with effort most things can be claimed to not be understood.

Yours sincerely,

Glenn



2016-11-13
RoboMary in free fall
Jonathan, (please excuse the informality, but I'm not sure of proper etiquette here),
You stated
"Each time you replace a neuron with a mass of silicon gates you lose one copy of rich experience."
How do you know this?  You also say 
"Dennett's suggestion that behavior can be equated with experience is simply ridiculous."
That's not much of an argument. Exactly why is it ridiculous? Also, I'm not aware of where Dennett makes that equation.  What if behaviour is a necessary outcome of experience but isn't the whole story?  Finally, you say
"But the intelligence and behaviour of the robot you get with replacing all cells is exactly the same as that of the original brain.
So you would end up with a p-zombie, a creature/robot/agent that will swear up and down that it experiences the richness of a beautiful sunset, the heartbreak of a child caught in a war zone, the appreciation of nuance in a well-played Mozart piece. If we tell it that it doesn't have "experience", then it may respond "well, I certainly have something which matches all of the descriptions of experience in the literature.  If it is not experience, what is it?  Do you have it?"


2016-11-13
RoboMary in free fall
Dear James,
My suggestion that each time you replace a neuron you lose one copy of experience was based on arguments given before about where one might find a local event with enough degrees of freedom in the causal input of relevance to neural information processing as we know it. We are not in a position to confirm the proposal empirically, I admit. But like lots of things in science one can choose one model rather than another on the basis of the internal consistency of the theory (a point Popper made). As I think I indicated earlier any model that spreads an experience over lots of cells looks almost certain to be incompatible with physics as we know it. A lot of neurobiologists seem happy to violate the laws of physics in this way by postulating certain non-local laws of generation of experience in nerve networks, but that would be more dualist than Descartes. Without some principled reason why the laws of physics should suddenly disappear inside heads it is seriously unparsimonious and unmotivated.

The comment about Dennett was again intended to be a simple conclusion from arguments carried over from before rather than an argument in itself - which it is clearly not. Dennett is very good at not quite saying what it is he is saying, tending rather to poke fun at straw men I think. But as I understand it he has claimed that there is nothing extra to explain about experience beyond what can be covered by disposition to behave in certain ways. The simple example would be that to see red is to be in a dispositional state where one tends to say 'I saw red'. As far as I understand Dennett his whole point is that the behavioural tendency IS the whole story, or at least the causal sequence within that leads to that tendency.

The intriguing thing about Dan's theory is that his multiple drafts idea, which everyone tends to forget, is really very cogent. He thinks that signals carrying various versions of scenarios about the world, from sense organs or memory, are sent to lots of places in the brain, presumably so that versions with different emphasis can be computed over in various different ways, so that we can produce all sorts of different responses to a scenario very rapidly. That is more or less exactly what I think, except that I suspect that it is not so much 'many drafts' as just 'many copies', since if we are trying to explain experience it does not seem enough to have a model that just does fragments. So both Dan and I think that lots of places in the brain receive signals encoding our environment.

But the strange thing is that Dan does not then assume that this is where the experiences are. Where else could they be? Nothing else gets the signals that would provide the information. 

The other problem is that unless you explicitly think in terms of receiving information that can then be computed over in lots of different ways, you miss the fact that the output or behaviour will depend both on the experience and on the way the brain is set up at that point in time to compute over the experience in all sorts of different ways. Put simply, experience is input. Behaviour is caused by input intersecting with a pre-existing dispositional state that may be very transient. So behaviour does not even parallel experience. It is NOT a necessary outcome. It is a CONTINGENT outcome of any experience. So there IS more to the story than just the behaviour.

And you can show this with brain monitoring. If people are told to respond when they see a letter, you will see a certain trace on a brain scan when they see a letter, and not when you flash it so fast that they do not experience it. But if you show them a number that they do experience, the brain scan also shows nothing. Presumably if you then tell them to respond to numbers you get the reverse.

Your last point is a very fascinating one and quite correct. The robot that is constructed so that it mimics all the computational steps in our brains, and therefore responds in exactly the same way to stimuli, including saying 'of course I realise that the boat people are suffering terribly', need not host any complex experiences of the sort we do. It is an absolute non sequitur that it should. There are already uncanny video projection systems of cartoon characters with voiceovers that appear to respond to your body language with clever friendly chat in exactly the way a human would, yet are just programmed discs. Linked up to a sophisticated system for actually monitoring your movement, I think one would be totally convinced there was a real person inside. But there is no need to suggest anything like our sort of experience inside.

This leads to the interesting conclusion that behaviour, however sophisticated, can never be proof of experience. Which raises the question: how do we know that there are experiences within our own bodies? What is the evidence? The evidence has to be the experience, which is only ever evidence for that subject. That might seem OK, but to know that A is of a type B (that my experience IS an experience) seems to require some sort of comparison with a gold standard B to calibrate. There has to be 'something that it is like'. There needs to be another experience to compare with, but there is none, it would seem. The usual route of definition fails.

This is an extremely difficult problem to unpack, but it seems that we have to say that if there is an experiencing subject that senses how much boat people suffer, and it is connected up to a reporting mechanism, it will report that experience; but that in no way prevents a system with no relevant experience, having been taught the rules of language and emotional response, from processing information the same way and saying the same thing. All that one can say is that this human subject here now is not a zombie because it is experiencing - as is manifest to it.

That might all seem very speculative but there is an interesting practical spin-off. That is that for the truly experiencing human subject we need to have a model for how it can lead to reports of its experience. And if there are lots of copies of experience in a brain at once that gets complicated, because no one copy is entirely responsible for the report. Moreover, since experience does not get passed on to the next event, it is not immediately clear how an experience of freesias stimulates some other cells to activate the voice box to say 'freesias'. In fact the only way I can see it can be done is for the signal to the freesia experience to be carried by an axon that branches to the bit of brain that is programmed to say 'freesias'. So a report of experience is not directly caused by the experience. The only reason it seems to be is that our brains develop by trial and error to constantly compare what is experienced inside with a parsing of their own account of it. We have a sort of 'free won't' situation, where the causal effect of experiences on reports is simply a corrective process to make sure they match up in retrospect. Occasionally we find it operating when we have a slip of the tongue.

An important implication of this is that we have no direct way of ensuring that our reports cover just one experience event at a time. That may well be why we start off convinced that our experiences are like high resolution photographs. In fact they are lots of rough drafts in series but whenever you check how clearly you see a detail your brain has already switched to focus on that detail. So the worst case account is that our experiences are not rich at all. They might just have two degrees of freedom. But that is really not credible because we know roughly how often they could shift (~20msec) and that you would need to pack at least 100 degrees of freedom into that time frame to get what looks something like high resolution.

You then come back to the question of how we can know that an experience of this richness is a single event rather than an accumulation of a whole family of micro events in one or more neurons. That is where the law of locality comes back in. It is in a sense a metaphysical argument, but it is based on the sort of case from parsimony that we use all the time in science, and if you allow experiences to be aggregates of events it is very difficult to see how you can avoid an infinite regress in time.

How's that for a quick summary?!

Jo

2016-11-13
RoboMary in free fall
Reply to Glenn Spigel
Hi Glenn

RE: “If you take that [charitable] reading then consciousness would refer to the features that a human being has which are not reducible to features discussed in physics.”


“Charitable reading?” Never liked that phrase much. It’s often equivalent to:  let’s assume nonsense is not nonsense. But OK, for argument’s sake let’s say one can get that from the “zombie” idea. (I’m quite sure one can’t, but let’s just suppose it.)

Then so what?  It’s always been a possibility that consciousness is not reducible to physical features, and many people have thought so for yonks. In fact, it’s possibly the majority view.  Why would we need a juvenile “zombie” idea to tell us it’s a possibility (assuming it even does – it doesn’t in fact: it tells us nothing whatsoever.)  The zombie thing certainly doesn’t prove it’s the case – or even prove that it’s a possibility.

One simply cannot get away from the fact that in the “zombie” formulation there is no indication whatsoever of what the term consciousness signifies. It is a conceptual blank. That being so, one is being asked to subtract something that is utterly unknown – which is patently absurd. Once one sees this – and it took me about 3 seconds, so it’s not hard – one realises one is wasting one’s time on absolute nonsense.

RE: “Chalmers used the term "conscious experience" in the text you quoted”


So what? Simply using a term means nothing in philosophy unless one says what one means by it – especially in the case of a term as elusive as consciousness.

RE: “While for comic effect you relate the idea of philosophical zombies to zombies in the films.”


No, not just for comic effect. I do so because that’s where the idea comes from. Given that the idea of a so-called “philosophical zombie” is nonsense – which it is, as I’ve pointed out – the only other possible way of giving some meaning to the term “zombie” is to relate it to the Hollywood versions. Now, many philosophers in this area seem to be avid fans of juvenile Hollywood fantasies (brains in vats etc) so it’s very likely they do have Hollywood zombies in the back of their minds. (Chalmers’ “all dark inside” is pure Hollywood, for example.) Need I say more? Once philosophy descends to tripe like that, we can all just fold our tents and go home.

RE: “If you can then minus the awareness of the colour black, so that there is no awareness of anything.  Can you do that?”


But this simply poses the problem you’re seeking to solve: what is meant by awareness (consciousness etc.). The very notion of “awareness”, or indeed of anything, may be quite meaningless where death is concerned (as might be the word “meaningless” itself). This line of thinking takes us nowhere. Again, think of Hamlet.

Re: “Regarding (2) even though I understand what Nagel meant, and like many do not think it is useless, I would be interested in your "proof" that we do not.’


My proof is not that we don’t “understand” what he meant. It is proof that the Nagel proposition is total gibberish. I’ll post it later.

DA

2016-11-14
RoboMary in free fall
Jo,
Thank you for the extensive, um ..., quick summary.  I'm going to try to focus on one point at a time.

You said 
"As I think I indicated earlier any model that spreads an experience over lots of cells looks almost certain to be incompatible with physics as we know it.'
I think this needs explanation. I'm not sure what you mean by spreading an experience over lots of cells.  I'm assuming you don't mean that an experience cannot involve a series of neurons firing, as that seems necessary for any visual experience to happen.  Are you saying that an experience can't be the simultaneous firing of neurons in parallel?

It seems like you're suggesting "an experience" has to be an instantaneous thing that occurs in one precise location.  Why cannot an experience be a collection of events that occur over time? Why cannot the experience of a sunset be a combination of the separate experiences of the blue sky over there and the red cloud over here and the bright spot of the sun over there plus all the other sensations that might be happening?

*

2016-11-14
RoboMary in free fall
Reply to Glenn Spigel
Hi Glenn

As promised, here is my refutation of the Nagel mantra ("There is something it is like to be conscious"). I call it a mantra because it is repeated over and over in just this form, quite unthinkingly, as if it were a sacred text. I’ve copied it from the thread Human consciousness and evolution where the issue also came up.

Having offered to give an explanation of why I think the “something it is like” mantra is useless, I thought I should put my money where my mouth is and give it. I’ve done so a couple of times before on other Philpapers threads and been largely met with blank looks (metaphorically speaking). I do hope that doesn’t happen here. [Alas, it did...]

The proposition we have to deal with, in its time-honoured form, is: “There is something it is like to be conscious.” This, according to numerous philosophers, great and small, tells us something important about the nature of human consciousness. It’s often invoked by philosophers of consciousness when they make their introductory moves defining what they’re talking about. It seems to have widespread acceptance and approval. But let’s just have a little look…

First, if we are serious philosophers, especially serious analytic philosophers, we’ll surely want to be clear about the meanings of the words we’re using. So what do we mean by the words in the proposition in question?

Clearly, the word we really have to focus on – the word that’s crucial to the proposition – is “like” (the rest are quite straightforward). So what do we mean by “like” here?

If we think about the range of meanings the word “like” can bear, there seem to be two possibilities in the context (please correct me if I am wrong). There’s the “like” of comparison or similarity, as in “like a diamond in the sky”; and there’s the like of “feels like” as in “I feel like a cup of tea”. The proposition in question could bear either of these meanings (which is a problem in itself, but I’ll come back to that). So “There is something it is like to be conscious” could mean “There is something it is similar to to be conscious” or “There is something it feels like (doing etc) to be conscious”.

(I should interpose here that someone on another thread once pointed out to me that Nagel ruled out the first alternative in a footnote to his article, but let’s keep it in for the time being for the sake of completeness.)

So, taking each proposition in turn, what firstly can we make of “There is something it is similar to to be conscious”? The obvious response to that statement is “Really? So what is it similar to?” And there’s the problem. Someone might perhaps answer “awareness” or “perception” or “mindfulness” or some such, but that’s just a little game of near-synonyms taking us nowhere. And obviously the mere fact of being similar to something is of no consequence since just about anything can be said to be similar to something else.

So the like of similarity/comparison gets us nowhere. When we insert this meaning, we end up shrugging our shoulders at the vacuity of it all. (And, as I say, Nagel himself ruled it out anyway.)

So let’s try the other meaning.

In this case, “There is something it is like to be conscious” means “There is something it feels like (doing) to be conscious”. It’s not a matter of similarity now; it’s “like” as in inclination – e.g. (feel) like having a cup of tea.

So, what could it feel like (doing, being, etc) to be conscious? The question borders on the absurd, doesn’t it? Obviously, it could feel like anything – from having a cup of tea, to going on a holiday, to slitting one’s throat.

So the like of “feels like” gets us nowhere as well.

AND THOSE ARE THE ONLY TWO POSSIBLE MEANINGS OF “LIKE” IN THE CONTEXT. THERE ARE NO OTHERS. (Again, pls correct me if I am wrong).

This is why I say that the proposition in question is vacuous. Both possible interpretations of the word “like” take us nowhere. They give us a proposition that is either empty or near-nonsensical.

Well, one might say, why doesn’t this fact seem readily apparent when we first encounter the proposition? Why have so many people been taken in by it and claimed it meant something important?

I think the answer lies in the point I mentioned above – that, due to the odd phrasing of the proposition, “like” can bear two different meanings. Unless one undertakes the kind of analysis I’ve given above (the kind of analysis that should, surely, be almost instinctive for a philosopher – especially one of the “analytic” persuasion), one is easily bamboozled and led to believe that something deep and important is being said. (“Gee, yes! something it is like.”) This becomes obvious once you straighten out the syntax. If I say “Being conscious is like something” or “Being conscious feels like (doing) something”, one is immediately more wary. “What is it like?” one immediately wants to say. Or “What does it feel like doing?” The twisting of the syntax in the time-honoured formulation blurs the two meanings and deflects these obvious and quite sensible reactions.

If anyone thinks this reasoning goes wrong somewhere, I invite them to tell me where. But please don’t tell me I’ve analysed the proposition too closely – as I seem to remember someone once did. That reaction doesn’t befit a philosopher worthy of the name. And if you think the “like” in question means something different from what I’ve said, please tell me what that meaning is, how it fits the context, and why it makes the proposition in question important. Oh, and please spare me too many references to Nagel's article. As I said in a recent post, I've seen multiple different interpretations of what he's saying, and in any case it's the "something it is like" formula itself that philosophers mostly rely on. Nagel is just mentioned (occasionally) as "authority" for it.

Apologies for the length of this. No way really to make it shorter.

DA



2016-11-14
RoboMary in free fall
Hi Jo, 

You wrote in post  http://philpapers.org/post/22574 :

Present day philosophers seem more interested in putting up barricades to ensure they never change their views.
So I am surprised to read that you have come to the end of the line with the relation classification, given that earlier for a few posts you wrote that you did not understand what was meant by relation, then for several posts you could not follow what the categories meant, claiming that they were ambiguous, and then, when asked about the ambiguity, you seem to have simply backed out, offering none. So it seems to me that you are putting up barricades to preserve your view, since it is hard for me to believe that you were not able to guess the differences between those categories, especially given that there were examples. But if you did not want to make an effort to understand, by taking a guess for example and asking whether it was correct, then that is up to you; you have your reasons, I guess. The funny thing is that the issue they are used to raise is key to understanding a problem with any suggestion of an undesigned universe, which I assumed you were proposing.

Regarding your theory about a unit, it seems arbitrary what counts as a unit. In each case (with the logic gates and the neuron) there are multiple atoms involved; if it were just an issue of proximity, then you could have imagined the logic gates implemented at the atomic level, using some kind of nano-technology.

You mention:

In a brain there look to be events within dendritic trees of neurons where up to 50,000 independent signals, each with a degree of freedom, contribute to a single computational event.
But does the neuron not just fire or not fire depending on those signals, or does it have the versatility of response that a human has when answering questions, for example?

Yours sincerely, 

Glenn  

2016-11-14
RoboMary in free fall
Reply to Derek Allan
Hi Derek, 

I read your other post http://philpapers.org/post/23550 too, but am sorry to say I did not find it convincing. I think the "like" was the first one, a like of comparison. All comparison compares features of our mental experience (as Berkeley pointed out they are the only features we know, though we can imagine a material world with features like the mental features, such as extension etc.). The only features of reality you know are consciously experienced. 

I do not think the expression "there is something it is like to be conscious" is supposed to compare consciousness to something other than consciousness as you seemed to suggest. 

Regarding the comparison of philosophical zombies to film zombies, they do not involve the same concept, so the comparison seems silly.  With the film zombie the body is supposed to be dead (no heart pumping etc.), but that is not the case with philosophical zombies. 

Last post I asked you:

So while different philosophers may use the same terms differently, what I am inquiring about is whether you are aware of any features that human beings have which you cannot understand how they can be reduced to the features discussed in physics?
But I did not notice you reply; if there are any, perhaps you could mention what they are.

Yours sincerely, 

Glenn 


2016-11-14
RoboMary in free fall
Reply to Glenn Spigel

Hi Glenn

RE: “I think the "like" was the first one, a like of comparison”

As I mentioned, Nagel – who is the source and authority for all this – ruled that meaning out. That doesn’t bother you at all?

RE: “as Berkeley pointed out they are.. etc”

This is not about what Berkeley or whoever may have thought about comparisons or whatever. It’s about a quite specific statement and what, if anything, it might mean. I can see you haven’t read my refutation carefully but, don’t worry, you're in good company. I’ve posted it on 3 or 4 threads at different times and so far I haven't encountered anyone who has been able to follow the argument. (Though it’s not at all difficult - just a bit of straightforward, run-of-the-mill philosophical analysis.)

RE: “I do not think the expression "there is something it is like to be conscious" is supposed to compare consciousness to something other than consciousness as you seemed to suggest.”

So what are you saying? That consciousness is being compared to itself? That would be enlightening! Like saying “This apple is like itself.”  

RE: “With the film zombie the body is supposed to be dead (no heart pumping etc.), but that is not the case with philosophical zombies.”

Interesting. Philosophical zombies have no brain (because, gee, heck, they’re “all dark inside”) but they do have a functioning heart. The anatomy of philosophical zombies is truly fascinating. Can they leap tall buildings in one bound, do you think?

RE: “What I am inquiring about is whether you are aware of any features that human beings have which you cannot understand how they can be reduced to the features discussed in physics?”

I’m not sure I understand the question. Are you asking if I think there are any features of the human body that cannot be explained in purely physical terms? It’s very possible human consciousness is not explainable in purely physical terms. But who knows? Since no one has even come close to giving a convincing explanation of what human consciousness is, and the relevant area of philosophy is bogged down in puerile ideas like “zombies” and brains in vats, the answer to your question is: so far, it’s anyone’s guess.

DA


2016-11-14
RoboMary in free fall
Dear James,
If the idea that an experience cannot be spread over lots of cells needs an explanation I would recommend chapter 6 of William James's Principles of Psychology, on 'The Mind-Stuff Theory'. Lots of other people have made the point, but James uses simple ordinary language to reach the conclusion that an experience spread out over the various parts of the brain would not be 'a physical fact'. For anyone not used to tracing causal chains in biology as part of scientific research it might not be immediately apparent what is meant by this, but to me as a biologist it seems clear and cogent.

Before trying to unpack 'physical fact' let me just deal with the neurons firing. You need lots of neurons to fire to produce enough independent signals to generate a rich experience. However, these signals will cause an experience in something that they all signal to. Unless they all signal to the same thing there will be nothing that gets all the signals so nothing to experience their combination. The firing itself cannot be an experience because by 'the experience had by X' we mean the way the world influences X, not the way X influences the world. 'Cell firing' is an account of how a cell influences the world.

James Blackmon has a nice essay somewhere on the net about the paradox of multiple cells being involved in an experience. If you suggest that parallel events in cells A, B, C, D, and E in a brain together constitute an experience then there is no principled reason to exclude cells G and H in that brain or cells P and Q in the brain of the person in the next house. They are all spatially separate events. The standard thing is to then say that events in A, B, C, D, and E are 'together' because of some pattern of connection between these cells. But connection is only relevant to sequences of events, not parallel events. Moreover, if you try to add together sequences of events in connected cells you get overdetermination and an infinite regress in time. You cannot add the signal from A to B to the one from B to C because the one from B to C will be dependent on the previous one from A to B - and so on for ever.

It might seem arbitrary to raise these objections, although for many people William James's way of saying this is crystal clear (I often feel that my trying to add to it ends up less clear). However, it is not arbitrary. It reflects an aspect of physics that is so fundamental and intuitive that nobody talks about it - what is called locality. Physics presumes that there are events that connect in dynamic, or causal, sequence. Parallel events are not connected in any sense. Special relativity showed that this is not just a way of looking at things. It is an empirical fact. In a certain sense there is no ontology beyond epistemology - all that exists is passage of information.

Classical Newtonian physics does not stipulate the grain of 'events' because its maths assumes that it is always dealing with aggregates of events that, if you investigate more deeply, will always prove to be composed of finer scale events, on to the infinitesimal. So although you can treat the earth as having one 'centre of gravity' the assumption is that this is a mathematical trick and that it reflects the separate effects of an infinite number of material components each acting exactly where they are. Newton was embarrassed by the fact that this action did not in fact appear to be local - it could work 93 million miles away - and again it took Einstein to restore locality with a better theory. Leibniz realised that actual reality ought to have a monadic grain of real events. He could not work out how to describe these mathematically but he saw that their way of relating would be quite different from intuitive ideas of mechanical causation. Quantised physics has given us the grain - the quantised mode of excitation - and agrees that the way events relate is very unexpected (for all except Leibniz).

So at last the common sense idea that all events happen at the place and time that they happen and always in a specific sequence now has a rigorous mathematical base. It is a bit odd because each event is in fact extended in domain. However the relation between events is always determined by field values at defined points in spacetime and the sequence of relation between events is unique. We can in fact forget all the difficult maths of quantum field theory and just be reassured that at root the common sense idea that there are real chains of real events in physics is well grounded.

This is all relevant to experience because in physics all attempts to study how the world works involve chains of events, the last of which is a human experience. As people like Wigner pointed out, you might say the last event is the activation of a sensor in a videocamera, but to use that in physics someone has to look at the display screen. To interpret any observation we make the assumption that all events relate locally - and note that this has to include the final event of experience. If any event in the chain is allowed not to have an address in space and time the whole theory crashes like a house of cards. You get an absurdity like the fact that from two incompatible premises any conclusion follows. If you think about it we do normally insist that the experience is local. There is no point in trying to observe an eclipse of the sun a week late or on the wrong continent. All predictions for observations have a prescribed location in space and time. Normally we do not bother to stipulate more precisely than where the observer's body has to sit, but for clinical neurology we do. We predict that an observer located in the brain will not observe a tap on the finger if the median nerve has been severed. All neurology assumes that locality applies deep into the central nervous system. Descartes used that to work out that observers ought to be in the pineal. He included a false premise (the apparent need for a unique rather than a paired site) but otherwise had the physics right.

The weird thing is that when neurobiologists start to write about consciousness they throw the law of locality out of the window. They say Descartes was wrong. They suggest that experience somehow emerges from lots of firing events in lots of places at once. This is completely incompatible with physics in any known context - for the reasons above. The reason why they do this is that they cannot see how experience could possibly be local, because of the same false premise that Descartes introduced. They assume that there is only one copy of experience at a time. And since we have good reason to think that experience occurs in both sides of the cortex and in several places in each, it might seem that it has to be smeared over lots of cells. But the simple alternative is that there are lots of copies, each in an individual cell.

William James considered this idea in chapter 6. It is not at all new. However, he argued, reasonably on a Newtonian basis, that even a cell could not host one local event. Events had to be infinitesimal, or at least belong to atoms. However, the day has been saved by the demonstration in modern physics that individual events can actually have very large domains. Quantised units are not tiny. If anything they fill the universe - but all their relations have precise addresses in spacetime. The individual computational events in brains are instances of post-synaptic integration in individual neurons. There is good reason to think that in quantised condensed matter physics these can be real individual events. We want events that are big enough to get about 1,000 degrees of freedom in the input of relevance to signalling but not involving separate components that have to be seen as in sequence (thus more than one event). This is the obvious scale to choose.

Viewed in this way brain function is remarkably easy to understand. Each cell has a rich input that it experiences and a single spike output that contributes to the experiences of the next cells along. It could not really be anything else.

Another quick summary for you.

Best wishes

Jo


2016-11-14
RoboMary in free fall
Reply to Glenn Spigel
Dear Glenn,
I am giving up on relations not to defend a particular view on what the relation is, but because you have failed to take on board a series of problems I have raised. Your question seems circular. The causal relation between A and B will be causal, the functional relation will be functional, the symbolic will be symbolic. They are not alternatives. They are different accounts of the situation that use language in different ways that involve words acquiring different sorts of meanings. So 'seeing red' or 'being blue' may have different meanings if considered in functional terms or causal or symbolic terms. And since, as I have said, there will never be a demonstrable fact of the matter what it is like for the robot in each case, there is nothing achieved.

In terms of dynamic units, as indicated above to James, this is no longer arbitrary in fundamental physics and it is not arbitrary in neurology because we can make a case for what combination of signals is likely to have its content reflected in reported speech. There are some caveats but to a first approximation we would expect it to be a set of signals all arriving at a site of integration and the only site of that sort in a brain is the dendritic tree of a neuron. As indicated to James the atom is not the relevant level. It is interesting to note that atoms are actually very rare things in our world - mostly inert gas units. Otherwise physics now recognises subatomic particles and molecules as dynamic units. The deep electrons relate to nuclei, the valency electrons to molecules; atoms do not really figure. Dynamic units in condensed matter physics occur at all sorts of levels so the task is to find the relevant ones at the level of content of experience as reported.

A neuron will fire in response to the pattern of input signals in the context of a state of responsiveness, which may or may not vary much in this context. The versatility of response of a human is just a reflection of the fact that brains contain lots of different banks of cells doing different jobs, contributing differently to any final answer to a question. So if you tell someone to say yes when they see a letter and certain cells in their head experience a number, any outputs from those cells indicating that a number has been seen are ignored by cells controlling speech output. But if the subject has been told to say yes to numbers the speech-controlling cells will activate the word yes. There is no suggestion that individual cells are making brain decisions on their own. This misconception arises from the common assumption that the seat of experience must be the seat of 'agency'. We do not have any such concept for a computer because once you get to serious theoretical computational models the idea becomes totally implausible. In fact there is every reason to think that in quicker situations, like a rapid sequence of frames in a psychology experiment or University Challenge, a 'yes' response or a buzzer press occurs before the subject is aware of the basis for their response. As indicated to James, experience probably does not cause reports in real time.

2016-11-15
RoboMary in free fall
Reply to Glenn Spigel

Hi Glenn

Just a further thought about zombies. You say: “With the film zombie the body is supposed to be dead (no heart pumping etc.), but that is not the case with philosophical zombies.”

So, I asked you if you thought the philosophical variety could leap tall buildings in one bound, but I now think that’s possibly going too far. (I think I’m getting my Hollywood fantasies mixed up. Isn’t that Spider man or maybe Grasshopper man?)

Still, the brain is the most energy-hungry organ in the body and if the philosophical zombie doesn’t have a brain (being “all dark inside”), s/he must have energy to spare, don’t you think? (Note the gender-neutral language; we should be aware that philosophical zombies may have feelings.) So maybe s/he would only need to sleep, say, every two or three nights, whereas we need to sleep every night. Now, I realize that’s hardly a super power, but it would surely give them an advantage as they go about their nefarious activities, like taking over the world and wiping us all out, don’t you think?

On the other hand, that outcome would certainly solve the question of the nature of human consciousness. Since we would all have been killed off by the zombies, the problem would no longer arise (they being, as we know, “all dark inside”).

Philosophy is a wonderful thing, don’t you think? It explains so much.

DA


2016-11-15
RoboMary in free fall
Hi Jo, 
I think you have failed to understand that the categories of relations I outlined are not

different accounts of the situation that use language in different ways that involve words acquiring different sorts of meanings.


They are all different suggestions about the type of relation between the activity and the conscious experience. This I thought was evident from the examples. But you seem to have misunderstood, and somehow thought that what is meant by activity or conscious experience somehow changes depending on which answer you give for the relation type between them. Presumably if you look back at the examples you will be able to see that the suggested relation leads to a difference in what the robot is suggested to be consciously experiencing (where there is no need to know the relation in order to understand the description of the conscious experience). You write:

And since, as I have said, there will never be a demonstrable fact of the matter what it is like for the robot in each case there is nothing achieved.


The robot is only being used as an example; what is really being asked is the type of relation between the relevant activity in your theory (a single neuron's activity in your case) and what you or I consciously experience. You and I both know what we individually experience, and we both know the fact of the matter regarding what we are consciously experiencing. Anyway, since you seem to have misunderstood, does my clearing it up mean that you are willing to continue, or is there another reason you do not want to?

In terms of dynamic units, as indicated above to James, this is no longer arbitrary in fundamental physics and it is not arbitrary in neurology because we can make a case for what combination of signals is likely to have its content reflected in reported speech. 


So with the neurons you decided that the neuron as a dynamic unit is not arbitrary, as you claim the case is decided by what combination of signals is likely to have its content reflected in reported speech; but you could have used the same criteria to determine the unit in an arrangement of logic gates. And as I mentioned, the logic gates could be implemented using nanotechnology if proximity also came into play, and I assume it does, because you seem to be basing your idea on the seat of the conscious experience being a single neuron in order to solve the binding problem; but there is still the issue of how the activity of the various molecules is bound in an experience, and the proximity seems arbitrary.

You write that there is a misconception that
arises from the common assumption that the seat of experience must be the seat of 'agency'. 
But if the seat of experience is not the seat of agency, and the seat of experience is a neuron, then how is there a report on its experience? So when you write:
Dynamic units in condensed matter physics occur at all sorts of levels so the task is to find the relevant ones at the level of content of experience as reported.
What content of experience are you suggesting is being reported? Presumably the neurons do not share the same experience as they do not share the same connections or the same firings.

Yours sincerely,

Glenn


2016-11-15
RoboMary in free fall
Reply to Derek Allan
Hi Derek, 
The philosophical zombie does have a brain. Just not one with any features that are not reducible to the features referred to in physics. I had asked you earlier:

... what I am inquiring about is whether you are aware of any features that human beings have which you cannot understand how they can be reduced to the features discussed in physics?
To which you replied in post  http://philpapers.org/post/23562 :

I’m not sure I understand the question. Are you asking if I think there are any features of the human body that cannot be explained in purely physical terms?
To answer I would have to know what you meant by purely physical terms. By "features discussed in physics" I meant whatever you think the terms used in physics equations refer to.

Also in that post in response to where I had written:

I do not think the expression "there is something it is like to be conscious" is supposed to compare consciousness to something other than consciousness as you seemed to suggest. 

You wrote:

So what are you saying? That consciousness is being compared to itself? That would be enlightening! Like saying “This apple is like itself.”  

I was thinking that the content of the experience was what could be attempted to be described (if it was known to someone who could attempt to describe it). No experience, nothing to describe.

Yours sincerely, 

Glenn


2016-11-15
RoboMary in free fall
Reply to Glenn Spigel

I’m finding your comments hard to follow, Glenn. But re the last bit, I wrote:

So what are you saying? That consciousness is being compared to itself? That would be enlightening! Like saying “This apple is like itself.”  

And you replied:

I was thinking that the content of the experience was what could be attempted to be described (if it was known to someone that could attempt to describe it). No experience, nothing to describe.

How does that relate? My point is simply that it makes no sense to compare something to itself. That is surely straightforward, isn’t it? One can hardly disagree with that.  How is what you’ve said relevant? 

DA


2016-11-15
RoboMary in free fall
Hi Jo,
Thanks for the William James reference. I read Chapter 6 (I so love the internet), and I think I understand the issue.  I will use the Blackmon example to explain.

Given that the neurons A, B, C, D, and E are firing together, you and James are saying that the simple fact of firing together does not constitute an experience. I agree. I would say that they constitute an experience only if a conscious agent recognizes that pattern of firing and does something with it. That "something" could be to initiate an action or simply to generate a memory. In this case, the conscious agent could be a single neuron, but it could also be a combination of neurons that act in concert. We could then say that the agent experienced A, B, C, D, E.

Do you see any issues with this explanation?

*

2016-11-15
RoboMary in free fall
Reply to Glenn Spigel
Dear Glenn, I am afraid your categories of relation ARE just different ways of describing or conceiving relations. Whatever the actual dynamic relations are that determine how the world is they are what they are - but because of ascertainment and definitional linguistic issues we cannot just write down what they are. We have to use various forms of language construction which might be in terms of function or symbol or whatever and then terms like 'seeing red' depend on the way the usage defines the words' meanings. 
I am afraid I do not see much point in trying to re-express all this. You are taking language on holiday just as Wittgenstein said people do. You are expecting words to retain meanings in new contexts when they do not. 'Relation' is a very slippery word that can mean all sorts of things. Consciousness is a biological scientific problem. It might benefit from some metaphysical grounding but the popular philosophers of mind of the twentieth century are not the place to look for that. They do not have enough grasp of biology to know where to go for metaphysics and even less understanding of metaphysics anyway. (Deleuze and the continentals are even worse but forget that.)

Regarding dynamic units: yes, we can apply this to logic gates, but in a computer a single computational step or event only has two degrees of freedom - that is always the rule. So no rich experience. You cannot add gates together, for the reasons I give to James, which come from the other (W) James. The molecules in neurons can be bound into a single dynamic unit through Goldstone modes that arise from spatial asymmetries in order parameters. That may sound abstruse but it is as kitchen sink as you get. A typical Goldstone mode is the vibration of a violin string. It is quantised just like an electron. Modern field theory has got remarkably familiar. It now deals with everyday life rather than subatomic particles. We know that modes of this sort occur in neurons. The simplest idea is that the cytoskeleton acts like the sympathetic strings in a sitar. If the incoming signals (akin to the positions of the fingers on the main strings) fit the right pattern, the cell sings and fires.
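
If it helps, the contrast I am drawing can be caricatured in a few lines of code (a toy sketch only; the 50,000 figure echoes the dendritic input count mentioned earlier, and the signals and weights are invented for illustration):

import random

def gate_event(a: bool, b: bool) -> bool:
    # One computational event in a computer: the outcome carries
    # exactly two degrees of freedom (True or False).
    return not (a and b)  # a NAND gate

def neuron_event(signals, weights, threshold=0.0) -> bool:
    # One post-synaptic integration event: thousands of independent
    # graded inputs converge on a single site of integration.
    return sum(s * w for s, w in zip(signals, weights)) > threshold

random.seed(0)
n = 50_000  # order of magnitude quoted for a dendritic tree
signals = [random.random() for _ in range(n)]
weights = [random.uniform(-1.0, 1.0) for _ in range(n)]

print(gate_event(True, False))         # two-valued input and outcome
print(neuron_event(signals, weights))  # one outcome, 50,000-dimensional input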

In terms of how an experience is reported if it is not the seat of agency, I have given quite a detailed account for James. It is not too difficult to see that it has to be a sort of 'free won't' postdiction relation in real time. And I am not suggesting that the totality of experiencing cells are not causing behaviour and reports over time. It is just that there is no single agent that has a single experience. Cells in banks with a similar level of function WOULD all have the same content of experience, but remember that content is defined in relation to an outside referent, because that is all we can relate it to at present. So there is no requirement that what it is like to be cell A is what it is like to be cell B, even if that were answerable in principle. The cells can respond in a consonant way as long as their responses are all consonant considered in terms of what outside referent they are representing.

2016-11-15
RoboMary in free fall
Dear James,
I see a very serious issue. What is this 'conscious agent' and how does it 'experience signals' if it does not get those signals? There is no ghost made of ectoplasm floating about to recognise signals that neurons get. Only the neurons that get the signals can get them and recognise them, surely? Nothing in the brain can recognise cells firing in lots of separate places. The only possibility is something recognising the potentials generated when all those signals arrive, via opening ion channels, at the same place (where the something is), surely? That is called a neuron. Other neurons might well act in concert but they will have recognised another set of potentials - the one in their own domain. We have to have a coherent physical story - a 'physical fact' as W James says.

I find it very strange that so many people including neuroscience colleagues seem to accept that there is some 'person' floating about amongst all these neurons that can experience what they are doing. There are no agents or persons in science. These are terms of lay social chat. And what is even more weird is that the same neuroscientists deny that there is any Cartesian soul, yet Descartes' soul is a perfectly respectable physical theoretical postulate - a non-antitylic dynamic unit located in some small part of the brain.

2016-11-15
RoboMary in free fall
Jo, I think I answered your question very explicitly in my last post when I said the "conscious agent" could be a single neuron or a collection of neurons, possibly a very, very large collection of neurons.  

I will go out on a limb and guess that you will have trouble with the idea of conscious agents being combined into higher-order conscious agents, such that when one refers to "an experience" one should be ready to define the physical boundaries of the conscious agent that is having that experience. Thus, by my understanding, every neuron can be considered a conscious agent, and every neuron is a combination of conscious sub-agents (including, at least, voltage-gated sodium channels). And neurons can be grouped into super-agents, for example possibly cortical columns. The important point is that for any well defined agent there is a repertoire of possible experiences. The repertoire of a single neuron is pretty small. The repertoire of a specific brain region, say the hippocampus, would be quite large.
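
To give a rough sense of scale, treat each unit as a simple on/off element (a gross simplification, and the neuron count below is just an assumed round number):

# Repertoire size if each unit is treated as binary - a gross
# simplification, purely to give a sense of scale.
single_neuron_repertoire = 2 ** 1    # fire / not fire
column_repertoire = 2 ** 10_000      # assuming ~10,000 binary neurons
print(single_neuron_repertoire)      # 2
print(len(str(column_repertoire)))   # the count runs to ~3011 digits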

Also, I am concerned that you may have trouble with the idea that a conscious experience can take a while to occur and that not all the inputs have to happen simultaneously. There is a very good demonstration of this in David Eagleman's book, The Brain. The situation presented is a foot race that is started by the firing of the gun. I think it is generally agreed that we "experience" the flash and the sound of a gun firing as one thing, i.e., a "unified experience" (at a close distance, anyway). But if you isolate the sight and the sound, it turns out we process the sound faster, even though the light hits our eyes before the sound hits our ears. This is demonstrated in the book by comparing reaction times: responding to just the sound is faster than responding to just the light. So I think what you have described as a "rich" experience does not have to be just one experience, or alternatively does not require all of the inputs arriving at one neuron at one time.
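
A little arithmetic sketch of the point (the distance and the processing latencies are assumptions of mine for illustration, not figures from Eagleman's book):

distance_m = 5.0
speed_of_sound = 343.0        # m/s; light's travel time is negligible here
visual_processing_s = 0.19    # assumed simple visual reaction latency
auditory_processing_s = 0.16  # assumed simple auditory reaction latency

light_response = visual_processing_s                                  # ~190 ms
sound_response = distance_m / speed_of_sound + auditory_processing_s  # ~175 ms

print(f"response to the flash: {light_response * 1000:.0f} ms")
print(f"response to the bang:  {sound_response * 1000:.0f} ms")
# The flash reaches the body first, yet the response to the bang comes
# sooner: arrival order and processing order need not match.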

*


2016-11-15
RoboMary in free fall
Dear James,
The problem for me is that you seem to be using the term experience in a way it is never normally used in English. If I see a sunset my experience is not the trees and sky out there; it is the event of something within my body receiving signals triggered by those distant objects. So it would be an aberrant use of the word to say that a group of cells firing was 'an experience' if the experience was had by something receiving the signals. The firing would be an antecedent cause of the experience of the receiver but not the experience. So I think all the stuff about flashes and noises is a non sequitur. At some point some subject inside us gets signals that seem to indicate sound and flash together. The preceding stuff is interesting neurophysiology but not an experience.

I have no problem with conscious units being combined into aggregates that also carry additional conscious units, at all scales. That is what Leibniz proposed and I think Goldstone theory gives us a good basis for believing that. An important caveat is that the larger scale conscious unit is not the sum of the smaller units but rather a new indivisible dynamic unit that 'supervenes', in trendy parlance, on the aggregate. There is no mereological sum to a new conscious unit; there is only a mereological sum to a spatial asymmetry in an order parameter that Leibniz calls a 'body', whose existence entails (through Goldstone) the new dynamic indivisible. A simple way of putting this is that every indivisible dynamic unit is a mode of action and at least notionally all such modes of action have a de Broglie wavelength. So electron orbitals have de Broglie wavelengths, as do molecules, as do crystals, as does a bullet and as does the earth. But the de Broglie wavelengths are not wavelengths of some added-up mode of action. They belong to new modes of action at each scale. So Fermi modes may sum to produce a body inhabited by a Bose mode.
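
For concreteness, the scale dependence is easy to compute from the standard de Broglie relation (textbook constants; the masses and speeds are arbitrary choices for illustration):

h = 6.626e-34  # Planck's constant, in joule-seconds

def de_broglie_wavelength(mass_kg, speed_m_s):
    # The de Broglie relation: lambda = h / (m * v)
    return h / (mass_kg * speed_m_s)

print(de_broglie_wavelength(9.11e-31, 1.0e6))  # electron: ~7.3e-10 m
print(de_broglie_wavelength(0.01, 400.0))      # bullet:   ~1.7e-34 m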

Nevertheless, in neural terms this is all pretty uninteresting because there is only one level where signals representing the outside world integrate - the level of the individual neuron. And there is nothing limited about the repertoire of a single neuron - if it has 40,000 inputs that is quite rich enough for a sunset experience. There is no dynamic unit in a cortical column that can interact with more signals. There are no basic order asymmetries to entail dynamic modes. Walter Freeman and Giuseppe Vitiello have tried to propose some in 'neuropil' but my understanding of physics is that they simply cannot work. The equations are in principle unwritable, and not just for practical reasons.

Put simply, we cannot say that 'the US Postal System' receives and reads letters. It does not even receive - it moves things about. And it operates at a level where the meaning of the items sent is never divulged. Computationally, what you are proposing looks to me like a non-starter.

So I have trouble with models that don't work or are irrelevant, yes, but I am prepared to reach out to anything unfamiliar if it makes good dynamic and computational sense.

2016-11-16
RoboMary in free fall
Hi Jo, 

I seem to have failed so far to explain to you what I mean regarding the categories of relation that I mentioned. You seem to be thinking that the dynamic relations are what they are and that they determine how the world is (including what the conscious experience would be like); and then thinking that, given that, with the robot in the examples, each different relation mentioned must be a relation between different features in order to be a different relation; and therefore that the terms describing those features, even if they are the same terms, must have taken on different meanings in order to reference different features. And thus that they are just different accounts of the situation, leading to you commenting that they are:

...different accounts of the situation that use language in different ways that involve words acquiring different sorts of meanings.

Do you think I roughly managed to understand how you were thinking I meant it? 

That was not what I meant, though. So let me use an analogy: imagine you do not know the relation between kinetic energy and the relativistic increase in mass, and I was asking you what your theory was; and because I did not want to make it too difficult for you, I was just asking which category you felt the relation would fall into, and gave you three categories:

1) the increase in mass equals the kinetic energy divided by a value lower than the speed of light squared
2) the increase in mass equals the kinetic energy divided by a value equal to the speed of light squared
3) the increase in mass equals the kinetic energy divided by a value greater than the speed of light squared

The relation in each category would be different, but the categories are not

...different accounts of the situation that use language in different ways that involve words acquiring different sorts of meanings.

So with the analogy, and with the examples, do you feel that I have managed to explain it in a manner that allows you to understand what I am asking you? I can understand that the relation will be what it is, but I am asking you roughly what category you think it will fall into (like the answers on the relation between mass and kinetic energy, the categories each have experimental implications).
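
For reference, the textbook special-relativistic bookkeeping picks out category (2):

E_k = (\gamma - 1) m_0 c^2, \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}, \qquad \Delta m = \gamma m_0 - m_0 = \frac{E_k}{c^2}

so the increase in mass equals the kinetic energy divided by the speed of light squared.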

You wrote:

Regarding dynamic units: yes, we can apply this to logic gates, but in a computer a single computational step or event only has two degrees of freedom - that is always the rule. So no rich experience. You cannot add gates together, for the reasons I give to James, which come from the other (W) James. The molecules in neurons can be bound into a single dynamic unit through Goldstone modes that arise from spatial asymmetries in order parameters. That may sound abstruse but it is as kitchen sink as you get. A typical Goldstone mode is the vibration of a violin string. It is quantised just like an electron. Modern field theory has got remarkably familiar. It now deals with everyday life rather than subatomic particles. We know that modes of this sort occur in neurons. The simplest idea is that the cytoskeleton acts like the sympathetic strings in a sitar. If the incoming signals (akin to the positions of the fingers on the main strings) fit the right pattern, the cell sings and fires.

I am not clear on why the abstract notion of a computational step (regardless of how it is implemented) should be significant, but it is a side issue that does not really matter too much. Though from what you have written I assume I am correct in stating that you are of the mind that robots built up of logic gates cannot consciously experience in a similar way to us, no matter what their behaviour.

You also write:

Cells in banks with a similar level of function WOULD all have the same content of experience but remember that content is defined in relation to outside referent, because that is all we can relate it to at present. 

I am not clear what you mean by function. Normally I would have interpreted it to include the arrangement of the surrounding neurons and what type of function they play in relation to what "causes" the inputs and what the outputs "cause". But you seem to be suggesting the seat of conscious experience is an individual neuron, so I had assumed that the neuron-in-a-vat would be the seat of the same conscious experience as long as the signals it received were the same. So I was thinking that you were suggesting that the chemical arrangement of the neuron, for example the amount and orientation of its dendrites, and perhaps the ordered dynamic state of the water in the perimembranous region of each dendrite, were the type of features that would influence the experience. Did I misunderstand the type of activity that you thought would influence the conscious experience, or were you suggesting that the activity would be the same in all the "cells in banks with a similar level of function"?

I also do not know what you mean regarding content being defined in relation to an outside referent. I thought you were suggesting that it would not matter if it was a brain-in-a-vat; there could still be a conscious experience of a tree etc. I do not see how what-it-is-like, which is what I mean by the content of the conscious experience, is defined in terms of an outside referent. That there might be thought to exist physical objects corresponding to the mental objects of your experience seems to be a metaphysical assumption, and one which I would have thought would be considered to be wrong even if there existed a physical universe and your brain was in a vat and you were consciously experiencing being on some fantasy planet. I am happy with you assuming that if I mention consciously experiencing green you take it to mean the type of qualia that you associate with the word "green", or that if I mention consciously experiencing a tree, you take it that I mean the type of qualia that you associate with the word "tree", together with the recognition of it as an object within your conscious experience, and so on; but I would prefer it if you did not assume me to have suggested any outside referent.

Yours sincerely, 

Glenn

2016-11-16
RoboMary in free fall
Reply to Derek Allan
Hi Derek, 

The point is that the content of the experience could be described; the description is what it is like. So a surgeon could ask "what is it like when I prod this part of your brain?" and the person could try to describe it: "a blue patch appears" etc. That the conscious experience is in some way describable (at least in a way that makes sense to the describer) means that it is like something (the description). It is not being compared to itself; it is being compared to the description. It is like the description. It might be that that which is consciously experiencing could not provide a description, and that human beings were not aware of what the experience was like, but if it was consciously experiencing then there would be descriptions which would describe (to some extent) what it was like, if only you knew them.

So back to the question that you have not yet answered: Do you know of any features that human beings have which you cannot understand how they can be reduced to the features discussed in physics (the features being discussed in physics being whatever you consider the terms in the physics equations to refer to)? 

Yours sincerely,

Glenn




2016-11-16
RoboMary in free fall
Reply to Glenn Spigel
I am sorry to disappoint, Glenn, but all your questions seem to me to indicate that you have got caught up in the sorts of language holidays that Wittgenstein warned us against. I cannot really say more. You seem to have lost touch with what the words would mean in terms of concrete causal dynamic relations and possible ways of communicating in natural language. Virtually all twentieth-century philosophers of mind have been lost in this fog, so you would not be alone, but all I can recommend is to throw all the philosophical jargon you have acquired in the bin and look at the science. That does not in any way mean taking on a 'materialist' metaphysics, though, since materialism is a naive metaphysics more or less only held by philosophers as far as I can see.

2016-11-17
RoboMary in free fall
Reply to Glenn Spigel

Hi Glenn

RE: “That the conscious experience is in some way describable (at least in a way that makes sense to the describer) means that it is like something (the description). “

But you are talking about specific states of consciousness, Glenn. There are umpteen of those. Sadness, happiness, anger, disgust, discomfort, fear, the sensation (if any) when being prodded in the brain (your example) etc etc

Nagel’s proposition – or at least the proposition allegedly based on his article – is this specifically: “There is something it is like to be conscious”. Not: something it is like to be sad, happy, angry etc, but to be conscious. It’s the state of consciousness per se that is in question. That’s what this whole debate is about – the nature of human consciousness – not what various different states of consciousness might be like. That’s no problem at all: novels, for example, will give you umpteen descriptions of sadness, happiness, anger etc.

I think I answered your other question in my email of 2016-11-14 – last para...

DA



2016-11-17
RoboMary in free fall
Hi Jo, 

For a few posts you stated that you could not understand what I meant by relation; then, when I managed to explain it to you, for a few posts you were insisting that the relations I was mentioning were not alternatives but all different ways of talking about the same thing. Now, when in the last post I illustrated, using a scientific analogy, that that was not the case, you are accusing me of being lost in a fog of "philosophical jargon". All I am asking is what physical activity, in your theory, would determine the conscious experience, and how variations in such activity (maybe you could give some examples) would lead to variations in the conscious experience. Is that put simply enough, do you think, or do you still feel I am lost in some fog of philosophical jargon?

Also, in my previous post I asked some related questions about your theory, especially regarding your comment:

Cells in banks with a similar level of function WOULD all have the same content of experience but remember that content is defined in relation to an outside referent, because that is all we can relate it to at present. 
In part of my response I wrote:

I am not clear what you mean by function. Normally I would have interpreted it to include the arrangement of the surrounding neurons and what type of function they play in relation to what "causes" the inputs and what the outputs "cause". But you seem to be suggesting that the seat of conscious experience is an individual neuron, so I had assumed that the neuron-in-a-vat would be the seat of the same conscious experience as long as the signals it received were the same. So I was thinking that you were suggesting that the chemical arrangement of the neuron, for example the number and orientation of its dendrites, and perhaps the ordered dynamic state of the water in the perimembranous region of each dendrite, were the type of features that would influence the experience. Did I misunderstand the type of activity that you thought would influence the conscious experience, or were you suggesting that the activity would be the same in all the "cells in banks with a similar level of function"?

But you gave no response to the question in your last post (as to what you meant by function, for example, and what kind of activity in the neuron you were thinking would determine what the conscious experience was like). Presumably you were not suggesting that my asking such questions was a case of me being lost in some fog of philosophical jargon. Is it that you would prefer not to discuss it any further?

Yours sincerely, 

Glenn

2016-11-17
RoboMary in free fall
Reply to Derek Allan
Hi Derek, 

Just to be clear I was not limiting the idea of a description to what I would refer to as emotional states. A description of my consciousness at the moment might include that I am experiencing typing a response to you on a computer. 

I think all Nagel is suggesting is that if at any point in time a thing is conscious, then there will be at least one description that the conscious experience at that point in time will be like. 

Regarding the response to my question, I cannot see any email from you (if we have a forum email then I have not noticed it). I did notice that you originally stated that you were not clear what I meant by the question in http://philpapers.org/post/23562 , and in http://philpapers.org/post/23634 I mentioned that and explained what I meant by "the features discussed in physics". So when I asked the question again last post I added the explanation in brackets. For your convenience I will write it again:

Do you know of any features that human beings have which you cannot understand how they can be reduced to the features discussed in physics (the features being discussed in physics being whatever you consider the terms in the physics equations to refer to)? 
 Is it possible to just answer it here in the forum, in case there was some email issue?

Yours sincerely, 

Glenn


2016-11-17
RoboMary in free fall
Reply to Glenn Spigel

Hi Glenn

RE: Just to be clear I was not limiting the idea of a description to what I would refer to as emotional states. A description of my consciousness at the moment might include that I am experiencing typing a response to you on a computer

Same difference, Glenn. You are still describing what you are conscious of, not consciousness itself. I could be conscious that I am flying to the moon; it is still not about consciousness itself.

Re: Is it possible to just answer it here in the forum, in case there was some email issue?

This is what I said:

’I’m not sure I understand the question. Are you asking if I think there are any features of the human body that cannot be explained in purely physical terms? It’s very possible human consciousness is not explainable in purely physical terms. But who knows? Since no one has even come close to giving a convincing explanation of what human consciousness is, and the relevant area of philosophy is bogged down in puerile ideas like “zombies” and brains in vats, the answer to your question is: so far, it’s anyone’s guess.’

DA


2016-11-17
RoboMary in free fall
Reply to Glenn Spigel

Those questions are a bit more feet on the ground, Glenn. I have given my position before but can recap.


The simple answer is that the physical or dynamic relation that would determine what a particular experience ‘is like’ to the subject in the Nagel sense would be a relation of a field of potentials to a mode of excitation. I say that because that is the only relation in modern physics that has richness influencing an individual. I am suggesting the field of potentials is likely to be electrical, since we know signals about the world are encoded electrically in brains and pretty much all everyday events are determined by electrical forces. The mode being influenced is harder to be sure about but there are some suitable options.

 

The problem we have is that if we want to match variations in dynamics to variations in what it is like for the subject we have a principled ascertainment problem. We can reasonably say that we do not expect to be able, even in principle, to find evidence that what it is like for a subject can be different in situations where the dynamic field/mode relation is exactly the same. But we may also have to say that there is no way in principle of demonstrating that they ARE the same, because there is no relevant act of comparison. Any comparison is vicarious, invoking multiple subjects and dependent on use of language that is based on defining words by reference to outside events.

 

A straw that we may be able to clutch at, however, is that certain descriptions in language seem to be totally incompatible with each other. It seems very unlikely that what it is like for any subject in my brain to have a visual experience of four oranges is what it is like for a subject in your brain to have one of three oranges. We would expect whatever system might have systematic rules for representation would run into impossible problems if that were the case.

 

Inside a brain there are dynamic modes in cells that are influenced by up to 50,000 independent local potentials in a field that we know reflect outside events - they encode the events we have experiences about. That looks to be enough to cover the richness and variety of our experiences. There will be vast numbers of modes influenced by potentials in silicon-based robots, but none of those fields will encode a rich scenario reflecting outside events, because binary computation only handles two bits of data at a time. If I look at a family photo album it is likely that the pattern of potentials arriving at certain types of cell will co-vary with the photos in a more or less one-to-one way as a ‘rich representation’. In a robot there is no equivalent dynamic relation. Moreover, if you are only ever integrating two binary bits there isn’t even the opportunity to encode colours as we know them. The data signal is likely to be 0 or 1 (the other signal bit determines how you compute over the data bit), which could be red or not red, but that would be the same as blue or not blue.
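To make the contrast concrete, here is a minimal toy sketch (my own illustration, with made-up numbers, not a model of any real machine or cell): one logic-gate event integrates two binary values, whereas a dendritic mode is pictured as influenced by tens of thousands of graded potentials at once.

    import random

    def gate_event(a, b):
        """One computational step in a binary machine: two one-bit inputs."""
        return a & b  # only four possible input states per event

    def mode_event(potentials):
        """Toy 'mode' influenced by every local potential simultaneously."""
        return sum(potentials) / len(potentials)

    bits = (random.randint(0, 1), random.randint(0, 1))
    field = [random.uniform(-70.0, -50.0) for _ in range(50000)]  # mV, toy values

    print(gate_event(*bits))  # one of 2 outcomes drawn from 4 input states
    print(mode_event(field))  # one outcome shaped by 50,000 degrees of freedom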

 

What you might well be able to do is set up an integrating unit in a vat with the richness of input of a neuron. If you put a wineglass on top of a piano while someone is playing Liszt, the acoustic mode of the glass that allows it to ring will have an input from every vibrating piano string. The mode will take up energy best if it is tuned to the key of the music, but whatever the key, the mode in the glass could be said to experience all the richness of the acoustic field it finds itself in. You could build a computer out of pianolas and wineglasses if the ringing of a wineglass tripped a change of piano roll in a bank of pianolas so that they started playing a new tune. 
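The tuning point can be sketched with the standard driven, damped oscillator formula (toy parameters of my own, not a model of a real glass): the steady-state amplitude, and hence energy uptake, peaks when the driving frequency matches the natural frequency.

    import math

    def steady_state_amplitude(f, f0, damping=0.05):
        """Response of a driven damped harmonic oscillator to a unit drive."""
        w, w0 = 2 * math.pi * f, 2 * math.pi * f0
        return 1.0 / math.sqrt((w0**2 - w**2)**2 + (damping * w)**2)

    glass_f0 = 880.0  # Hz, a made-up resonant pitch for the glass
    for string_f in (440.0, 660.0, 880.0, 1320.0):
        print(string_f, steady_state_amplitude(string_f, glass_f0))
    # The 880 Hz string rings the glass orders of magnitude harder than the others.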

 

When I was talking about a bank of cells with similar function I was intending to imply a bank of cells that, for instance, might have the job of recognising faces, or setting up short-term memories, or integrating object concepts with raw feature data, or a whole lot of other jobs involved in the thinking process. So I would expect all cells that are there to recognise faces to be fed inputs that are specifically relevant to that job - physical features of a face but not a name. Further on, once the face is recognised, cells may get inputs that combine a sense of ‘that is granny’ with a summary of physical features etc. My model of how things work has to be considered in the context of different groups of cells doing very different jobs in this way, as caricatured below. Asking ‘which group of cells has the sort of experience we report’ may be legitimate, but if reporting involves a complex indirect process, as suggested to James, I think, then there may be no clear fact of the matter as to ‘which experiences are being reported’.  Again we have a very tricky ascertainment issue.
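A toy pipeline (hypothetical function names of my own, not a claim about real circuitry) of what "different banks doing different jobs" amounts to, with each bank fed only the inputs relevant to its job:

    def extract_features(image):
        """Bank 1: raw feature data from the input."""
        return {"eyes": 2, "nose": "long"}  # toy output

    def recognise_face(features):
        """Bank 2: face recognition from physical features (no name yet)."""
        return "granny" if features["nose"] == "long" else "stranger"

    def bind_identity(name, features):
        """Bank 3: integrate the object concept with the feature summary."""
        return "that is " + name + ": " + str(features)

    feats = extract_features("photo.png")
    print(bind_identity(recognise_face(feats), feats))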


2016-11-17
RoboMary in free fall
RE: "is like’ to the subject in the Nagel sense"

What is that sense, Jonathan? i.e. what does "like" mean in the context?

DA

2016-11-17
RoboMary in free fall
Reply to Derek Allan
As well described by Jonathan Farrell in a recent J Consc Studies article, it is a vernacular sense rather than a term of art, which can be traced back both in philosophy of mind and in ordinary literature for decades if not centuries, and is known to almost everyone - except of course philosophical zombies, despite the fact that they might claim to know, and perhaps zombic philosophers. As has been pointed out by linguists, you do not ask what 'bucket' means in 'kick the bucket', nor even what 'used' means in 'I used to drink sherry', or 'just' in the non-philosophical vernacular usage of 'just in case'. We use words in groups in non-compositional ways. Beckmesser may have known the rules but he could not find a noble tune. If you gotta ask...

2016-11-17
RoboMary in free fall

Hi Jonathan

RE: “it is a vernacular sense rather than a term of art",

Term of art?  I’m simply asking you what you think the term “like” means in the Nagel context – as “vernacular”, “term of art”, or whatever you like.

For example, does it mean “similar to” as in: “This car is like that one”? Or does it mean something else?

It's a straightforward question; nothing profound.  Just needs a straightforward answer.

RE: “As has been pointed out by linguists you do not ask what 'bucket' means in 'kick the bucket'”

Perhaps - though a non-native speaker might well ask, as any "linguist" should know.

But if the word “bucket” happened to be used in a critical part of a philosophical argument (I can’t imagine how) a careful philosopher might well want to examine its meaning in the context. The meaning of the term “like” is obviously critical to Nagel’s proposition. You’re quoting him as part of your argument. I’m assuming you’re a careful philosopher so I'm asking you what you think it means. Again, nothing profound or mysterious. Simple question.

DA


2016-11-18
RoboMary in free fall
Reply to Derek Allan
Yes, but a naive question because any sensible ordinary person will tell you that language does not work like that and any competent linguist will agree. The Beckmesser reference was obviously lost on you.

2016-11-18
RoboMary in free fall

Hi Jonathan

RE: "any sensible ordinary person will tell you that language does not work like that"

Any sensible person will tell you that language means something, if it is not just gibberish.

I'm simply asking you what you think “like” means in the "Nagel" context.  You used it in that context, apparently quite confidently, yet you seem oddly reluctant to say...

DA



2016-11-18
RoboMary in free fall
Reply to Derek Allan
What does 'used' mean in 'I used to go fishing'? (I am not asking what 'used to' means, just used.)

2016-11-18
RoboMary in free fall

Hi Jonathan

You seem to be losing focus a little. There is no “used to” in the Nagel proposition as far as I can see.

Allow me to put us back on track. You said “…what a particular experience ‘is like’ to the subject in the Nagel sense…”

I simply asked you to tell me what “sense” you thought that was.  

You seem reluctant to answer. I assume you think there is a sense; otherwise you wouldn’t have used the phrase "in the Nagel sense". Unless, perhaps, you really think, as I do, that the Nagel sense of "is like"* is a non-sense?

DA

* Actually the canonical form of the Nagel thing, as I'm sure you are aware, is: "There is something it is like to be conscious".  The rather odd syntax is important to its advocates and I prefer to play on their turf, as it were, to give them every possible advantage. They need it.


2016-11-20
RoboMary in free fall
Reply to Derek Allan
Hi Derek, 

You wrote:

You are still describing what you are conscious of, not consciousness itself. I could be conscious that I am flying to the moon; it is still not about consciousness itself.


Yes, the description would be of what the being was conscious of. To be conscious implies an experience, which is what I assume you mean by "being conscious of something". Though I think it should be mentioned that you may not be aware of everything (or possibly anything) you are experiencing, in the sense that you could be conscious of a picture but not notice some feature of it, or during sleep you could experience darkness, say, but think nothing of it and retain only a short memory. So while anything you are conscious of is a conscious experience, you might not have been attentive to the whole (conscious) experience. I do not mean to complicate things there, but just thought I would point it out. Anyway, I think that what Nagel was suggesting was that at any point in time, if the form is conscious, it will be like something (a description of the experience) to be conscious (because to be conscious implies having an experience). If you just assume I am correct, do you find that the instances where people use that type of terminology now make sense to you where they did not before? If not, could you provide an example?

Regarding the reply you gave in post http://philpapers.org/post/23562 and which you quoted again: I was aware of that, as was made clear in my post (I quoted the link to that post, and a link to my response to it where I had given further explanation): 
Regarding the response to my question, I cannot see any email from you (if we have a forum email then I have not noticed it). I did notice that you originally stated that you were not clear what I meant by the question in http://philpapers.org/post/23562 , and in http://philpapers.org/post/23634 I mentioned that and explained what I meant by "the features discussed in physics". So when I asked the question again last post I added the explanation in brackets. For your convenience I will write it again:
Do you know of any features that human beings have which you cannot understand how they can be reduced to the features discussed in physics (the features being discussed in physics being whatever you consider the terms in the physics equations to refer to)? 
Since I would rather not go into what you mean by "purely physical", and since I had explained what I was happy for you to take the phrase "the features discussed in physics" to mean, could you explain to me what part of the question you did not understand, or thought was ambiguous? 

Yours sincerely, 

Glenn


2016-11-20
RoboMary in free fall
Hi Jo, 

You wrote: 

The simple answer is that the physical or dynamic relation that would determine what a particular experience ‘is like’ to the subject in the Nagel sense would be a relation of a field of potentials to a mode of excitation. 
and: 
When I was talking about a bank of cells with similar function I was intending to imply a bank of cells that, for instance, might have the job of recognising faces, or setting up short term memories, or integrating object concepts with raw feature data or a whole lot of other jobs involved in the thinking process.
But that still does not seem to answer the question that was in my last two posts to you:

... I was thinking that you were suggesting that the chemical arrangement of the neuron, for example the number and orientation of its dendrites, and perhaps the ordered dynamic state of the water in the perimembranous region of each dendrite, were the type of features that would influence the experience. Did I misunderstand the type of activity that you thought would influence the conscious experience, or were you suggesting that the activity would be the same in all the "cells in banks with a similar level of function"?

The reason I write that your reply does not seem to answer the question is that when you write that the conscious experience would be determined by "the physical or dynamic relation", I assume you are referring to things like "the chemical arrangement of the neuron, for example the number and orientation of its dendrites, and perhaps the ordered dynamic state of the water in the perimembranous region of each dendrite...", since, as I understand your conception, the neuron is the seat of consciousness, and all those features in your conception reduce to the features of dynamic relations between a field and a mode of excitation within the neuron. So, keeping the question from the previous posts in mind, can you answer the following questions:

1) Are you suggesting that there is a set of physical features that will be the same in each of the neurons in the functional bank of neurons, and that it is the dynamic relation's features that those physical features reduce to that determines the conscious experience to be the same for each of the neurons in the functional bank of neurons? 

2) If the answer to (1) is in the affirmative, then what type of physical features were you thinking of that would be the same in each of the neurons in the functional bank? 

(I had mentioned a few candidate features in my previous questions, but I do not know whether those were the ones you thought relevant)  

3) Are you suggesting that there will be a range of physical features that the individual neuron will have that would allow the function the neuron plays to be identified? 

Yours sincerely, 

Glenn


2016-11-20
RoboMary in free fall
Reply to Glenn Spigel

Hi Glenn

RE: “Yes the description would be of what the being was conscious of.”

So, it is not about consciousness per se. And if it isn’t, it is of no help to us.

RE: …I think that what Nagel was suggesting was that at any point of time, if the form is conscious, it will be like something (a description of the experience) to be conscious (because to be conscious implies having an experience).

Consciousness might “imply” having an experience? But so what?  We still don’t know what consciousness itself is (and if we go any distance down this track we will also need a definition of that elusive term “experience” – one that doesn’t smuggle in the notion of consciousness – which would just make us go around in a circle.*)

Re: Could you explain to me what part of the question you did not understand, or thought was ambiguous? 

I’ve answered that question now, haven’t I?

DA

* This by the way is one of the flaws in Chalmers’ definition of the so-called “hard” problem. He falls back on the term experience (even italicises it to stress its importance) but provides no definition of it. But quite clearly the notion of experience could imply consciousness (unless somehow it is excluded – and how?). So, in effect, he is potentially defining consciousness (or the “hard” part of it) in terms of itself. Not a good look…


2016-11-20
RoboMary in free fall
Reply to Glenn Spigel
Dear Glenn, It is important to make a distinction between a static view of 'physical features', which is often used in classical aggregate physics, and an analysis purely in terms of dynamic relation. What I think is interesting here is that there are reasons for thinking one can infer what the quantum-level relation would be before being able to work out how that would equate to a traditional description in terms of membranes and molecules. I am not sure that one description 'reduces' to another either way. 'Reduction' is a philosopher's concept that scientists tend not to have any use for. We normally describe at the level that is most appropriate to the grain of dynamics of interest. What modern field theory has shown is that the lowest-level explanation in terms of size does not 'explain' the higher-level explanations any more than the other way around. Philosophers are about forty years out of date on this.

But in general terms I would agree that the field/mode relation that I think we need to postulate will involve post synaptic electrical potentials, as well recognised in neurophysiology, generated across the dendrite membrane and perhaps an acoustic mode. Note that acoustic modes do not take much interest in what molecules are involved. A bell will chime whatever it is made of if the material is structurally ordered in the right way. It is tempting to say the mode would be in the membrane or in the microtubules but in fact it will be in the structure, just as the sound of a violin is not in the string or the soundbox but in the violin.

To get to what I think you are puzzling about: the function of each bank of cells, whether face recognition or memory tagging or whatever, need have nothing to do with the form of the field/mode relation. The function of each cell bank will largely be determined by where that bank is in the brain architecture. To use an analogy, within a car factory you have workers assembling bodywork, spray painting, installing engines etc., but when they get to the canteen they all look the same - people. Having said that, I strongly suspect that cells in different banks are structured rather differently in detail, to suit the computation they instantiate. In fact we know there is a vast range of structural variation between neuron groups. 

The problem at the moment is that we have very little idea what each neuron does computationally. That is not to say we know nothing. Hubel and Wiesel showed that there are cells that have an input of signals indicating changes in light intensity at certain points and the cell fires if it infers that these points form a line at a particular angle. 
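A toy caricature of such a cell (my illustration only, not Hubel and Wiesel's actual model): given points where the light intensity changes, "fire" if enough pairs of them line up near the cell's preferred angle.

    import math

    def fires(points, preferred_deg, tol_deg=10.0, threshold=3):
        """Fire if at least `threshold` pairs of change-points are aligned
        within `tol_deg` of the preferred orientation."""
        hits = 0
        for i, (x1, y1) in enumerate(points):
            for (x2, y2) in points[i + 1:]:
                angle = math.degrees(math.atan2(y2 - y1, x2 - x1)) % 180
                if abs(angle - preferred_deg) <= tol_deg:
                    hits += 1
        return hits >= threshold

    edge = [(0, 0), (1, 1.05), (2, 1.95), (3, 3.1)]  # roughly a 45-degree line
    print(fires(edge, preferred_deg=45))  # True
    print(fires(edge, preferred_deg=0))   # False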


2016-11-21
RoboMary in free fall
Reply to Derek Allan
Hi Derek, 

Nagel is not trying to explain the nature of consciousness, just pointing out something about it: that if a form was at any point in time conscious, then it would be like something (the description) to be it. I think it is useful, but I was just pointing out that if you interpret it that way, then perhaps you can see why I did not think your proof worked.

Last post, in response to where I had written: 

Could you explain to me what part of the question you did not understand, or thought was ambiguous? 
You wrote:
I’ve answered that question now, haven’t I?

No you have not. The question I was asking you about was the one above where I wrote that. I will write it again:

Do you know of any features that human beings have which you cannot understand how they can be reduced to the features discussed in physics (the features being discussed in physics being whatever you consider the terms in the physics equations to refer to)? 

Could you explain to me what part of the question (quoted directly above) you did not understand, or thought was ambiguous?  

Yours sincerely, 

Glenn



2016-11-21
RoboMary in free fall
Hi Jo,
I do not think that reduction is no longer an important concept, even in science. You write:

What modern field theory has shown is that the lowest level explanation in terms of size does not 'explain' the higher level explanations any more than the other way around. Philosophers are about forty years out of date on this.
I do not think the stereotype about philosophers is particularly useful. It would be like a philosopher suggesting that scientists' thoughts about reality are half-baked. About 40 years ago P. W. Anderson pointed out in the article "More is Different" that reductionism does not mean that "constructionism" is practical, because of issues such as scale and complexity, and I doubt many disagree. But Anderson was not claiming that reductionism was not useful, as you seem to be suggesting. A phenomenon might not have been predictable given the fundamental laws of physics, but I suspect a physicist would expect it to be explainable in terms of them. So I am not sure how many scientists (especially physicists) or philosophers would agree with you. Consider for example this quote from The Grand Design (2010) by Hawking and Mlodinow (p.206):

But in 1998 observations of very distant supernovas revealed that the universe is expanding at an accelerating rate, an effect that is not possible without some kind of repulsive force acting through space...  Physicists have created arguments explaining how it might arise due to quantum mechanical effects, but the value they calculate is about 120 orders of magnitude (a 1 followed by 120 zeros) stronger than the actual value, obtained through supernova observations. That means that either the reasoning employed was wrong or else some other effect exists that miraculously cancels all but an unimaginably tiny fraction of the number calculated. The one thing that is certain is that if the value of the cosmological constant were much larger than it is, our universe would have blown itself apart before galaxies could form and - once again - life as we know it would be impossible. 

So there is an example of physicists checking whether phenomena can be explained by reduction. Anyway I do not think this is too relevant to the discussion we are having, but was just mentioning it given what you had written.
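Putting the quoted mismatch in symbols (my own paraphrase, not Hawking and Mlodinow's notation): the calculated vacuum energy density satisfies

    \rho_{\text{calculated}} \sim 10^{120} \times \rho_{\text{observed}}

so any unknown cancelling effect would have to remove all but roughly one part in 10^{120} of the calculated value.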

Regarding your theory, you wrote:

The function of each bank of cells, whether face recognition or memory tagging or whatever need have nothing to do with the form of the field/mode relation. The function of each cell bank will largely be determined by where that bank is in the brain architecture.... Having said that I strongly suspect that cells in different banks are structured rather differently in detail, to suit the computation they instantiate. In fact we know there is a vast range of structural variation between neuron groups. 


While that might give clues to your answers to the question that I had given you:

1) Are you suggesting that there is a set of physical features that will be the same in each of the neurons in the functional bank of neurons, and that it is the dynamic relation's features that those physical features reduce to that determines the conscious experience to be the same for each of the neurons in the functional bank of neurons? 

2) If the answer to (1) is in the affirmative, then what type of physical features were you thinking of that would be the same in each of the neurons in the functional bank? 

(I had mentioned a few candidate features in my previous questions, but I do not know whether those were the ones you thought relevant)  

3) Are you suggesting that there will be a range of physical features that the individual neuron will have that would allow the function the neuron plays to be identified? 


It does not really answer them clearly. For example, was the answer to (1) in the affirmative? What was the answer to (2)? And to (3)? Could you perhaps clear those questions up for me? While I could guess at the answers (my guess was that the answer to (1) was "no", and the answer to (3) was "no" also), if I were to do so I could be accused of "putting words into your mouth", so to speak. You wrote:

To get to what I think you are puzzling about: 
I am not sure whether I should take that as a hint that you did not think the questions were clear, and as a reason why you did not directly answer them. If so, then perhaps we could speed things up a bit by you pointing out when you are not clear what I mean and explaining your confusion. 

Yours sincerely, 

Glenn 


2016-11-21
RoboMary in free fall
Reply to Glenn Spigel
Hi Glenn

Re: “Nagel was just pointing out …that if a form was at any point of time conscious, then it would be like something (the description) to be it”

OK. So what exactly would that “something” be? What would it be “like” to be conscious?

NB. There is a rather big snag here: Nagel himself said that he did not intend his word “like” to be taken in the comparative sense - i.e. as in “similar to”. So you'd need to find another meaning for it here.

And even if you ignore Nagel (many people seem to – although they still seem to think they are faithful disciples), it won't do to say that you don’t need to specify what the “something” is. That would reduce to saying that the distinctive feature of consciousness is simply that it is like something. But just about everything you can think of is like something, so that would tell us nothing.

Sorry there’s no way out of this. Nagel is dead in the water. Always was, always will be. It often surprises me that so few people realise it. Many philosophers are rather like sheep, it seems to me...

 RE: “Do you know of any features that human beings have which you cannot understand how they can be reduced to the features discussed in physics (the features being discussed in physics being whatever you consider the terms in the physics equations to refer to)? “

I have answered this. I said:

I’m not sure I understand the question. Are you asking if I think there are any features of the human body that cannot be explained in purely physical terms? It’s very possible human consciousness is not explainable in purely physical terms. But who knows? Since no one has even come close to giving a convincing explanation of what human consciousness is, and the relevant area of philosophy is bogged down in puerile ideas like “zombies” and brains in vats, the answer to your question is: so far, it’s anyone’s guess

DA 



2016-11-21
RoboMary in free fall
Reply to Glenn Spigel
Sorry Glenn, but I cannot answer your questions, because hidden in the questions are assumptions about what the words mean that cannot be assumed, because of issues of ascertainment etc. that I have discussed at length. Natural science cannot be done using words in the way philosophers like to, as if they had fixed meanings in all contexts. I really don't know what 'reduced to' means. Scientists never use the term as far as I am aware. If reduction is converting one level of description to another level in physics, that has nothing to do with 'explaining' in the sense that one might try to explain experience. I have never understood what people mean by experience being reducible to physics. And I cannot see what the quote from Hawking has to do with any of this, to be honest.

2016-11-22
RoboMary in free fall
Reply to Derek Allan
Hi Derek, 
I gave you how I thought Nagel was meaning it, and on that interpretation the "something" was the description. But you mention that Nagel stated he did not mean "like" in a comparative way. Do you mind letting me know where he stated that? Because if he didn't, then I can use the phrase "it being like something to be conscious" with a different meaning, and claim that meaning as my own.

Regarding the question, you have not answered it: you wrote that you were not sure of the question, and then went on to ask what I might have meant, but as I have now written numerous times to you, I do not want to go into a long discussion about what you mean by "purely physical terms". What I asked you last post was:

Could you explain to me what part of the question (quoted directly above) you did not understand, or thought was ambiguous? 


Obviously the question is now no longer quoted directly above, but you quoted it in your post, and so presumably you have enough information to figure out what is being asked. It would be great if you could answer it this time, as this is the third post to you in a row in which I have asked a similar question, and so far you have not answered. Giving me the answer to the question it was about, starting with "I'm not sure I understand the question", is not an answer as to what you are unsure of or what you find ambiguous about it. 

Best wishes, 

Glenn

2016-11-22
RoboMary in free fall
Hi Jo,

The quote from Hawking was showing that scientists (physicists at least) still use the concept of reduction. It was an example of physicists attempting to explain a supernova observation by reduction to quantum mechanical effects. I only mentioned it because you had written:
'Reduction' is a philosophers concept that scientists tend not to have any use for. 

I am glad I checked with you regarding your reply to the post with the questions I gave you, because otherwise I would not have realised that you had not understood them. I now understand you to be stating that you had not understood them, and so could not answer them when I asked them again, and that the reason is that you cannot understand what I meant by "reduced to" (presumably not even enough to guess). I assume the problem is in question (1):

1) Are you suggesting that there is a set of physical features that will be the same in each of the neurons in the functional bank of neurons, and that it is the dynamic relation's features that those physical features reduce to that determines the conscious experience to be the same for each of the neurons in the functional bank of neurons? 

You wrote:

If reduction is converting one level of description to another level in physics... 


By "reduce to" I meant the lower level description that results from reduction, given reduction to mean converting from a higher level of description to a lower level of description. You had mentioned in post http://philpapers.org/post/23766 that: 

...the physical or dynamic relation that would determine what a particular experience ‘is like’...
And so I was asking whether you were suggesting that the features of the dynamic relation(s) in each neuron that determine what a particular experience 'is like' would be the same in each of the neurons in the functional bank of neurons. Does that explain to you what I meant in question (1) (you could use it in place of question (1)) and are you now able to answer the questions?

Yours sincerely, 

Glenn


2016-11-22
RoboMary in free fall
Reply to Glenn Spigel
Dear Glenn, I have actually answered your question in detail on several occasions. But you have to take on board the caveats I have raised to any attempt to answer the question. You still seem to assume that an answer can be given without caveats. I don't think I can help further.

2016-11-22
RoboMary in free fall
Reply to Glenn Spigel

Hi Glenn

RE: “But you mention that Nagel stated he did not mean "like" in a comparative way. Do you mind letting me know where he stated that?”

In a footnote to his “bat” article.

RE the question you keep asking that I don’t follow, I can’t tell you what part I don’t follow; it’s the whole thing. I gather the response I gave you was not relevant so I can only suggest you try rephrasing your question to make it clearer. (I have a feeling it's not going to get us anywhere anyway, but by all means go ahead.)

DA


2016-11-23
RoboMary in free fall
Hi Jo, 
In post http://philpapers.org/post/23926 I acknowledged that you had written:

The function of each bank of cells, whether face recognition or memory tagging or whatever need have nothing to do with the form of the field/mode relation. The function of each cell bank will largely be determined by where that bank is in the brain architecture.... Having said that I strongly suspect that cells in different banks are structured rather differently in detail, to suit the computation they instantiate. In fact we know there is a vast range of structural variation between neuron groups. 


But that is not a clear answer. It did not address questions (1), (2) and (3) individually; instead it seemed a comment from which I should guess your answer. It seemed that you were answering "no" to question (1), 

1) Are you suggesting that there is a set of physical features that will be the same in each of the neurons in the functional bank of neurons, and that it is the dynamic relation's features that those physical features reduce to that determines the conscious experience to be the same for each of the neurons in the functional bank of neurons? 

and, as I mentioned, that is what I guessed you were stating. Though blended in was what seemed like an answer to question (3). The part I am referring to is where you stated that the cells in different banks are structured rather differently in detail, to suit the computation they instantiate, and that you know there is a vast range of structural variation between neuron groups. But what does that mean? Does it mean that the answer to (3):

3) Are you suggesting that there will be a range of physical features that the individual neuron will have that would allow the function the neuron plays to be identified? 


would be "yes"

or 

does it mean that there are different types of neurons in the brain, and different neuron groups have different proportions of those neurons, but the groups that you had mentioned (in post  http://philpapers.org/post/23766

When I was talking about a bank of cells with similar function I was intending to imply a bank of cells that, for instance, might have the job of recognising faces, or setting up short term memories, or integrating object concepts with raw feature data or a whole lot of other jobs involved in the thinking process.


do not have any neurons found only in those groups, though they have different proportions depending on the structure? If the latter, then the answer to (3) would be "no"

or 

does it mean that some groups in which you expect the neurons to consciously experience have specialised neurons, but other groups do not? So that a group might have specialised neurons particular to it, but this is not essential in your account, as there would be groups which you would be distinguishing between in terms of what they consciously experience which do not contain any specialised neurons. 

So where you have chosen, for some reason, not to answer each individual question with either a "yes" or a "no", there seems to be some ambiguity. Which question did you think there was a caveat with?

Yours sincerely, 

Glenn 


2016-11-23
RoboMary in free fall
Reply to Derek Allan
Hi Derek, 
I did not notice the footnote in the "bat" article. He had written it before footnote 2, and it was not mentioned in footnote 1. Do you have the number by any chance?
(I just googled the article and was looking at the pdf I found on http://organizations.utep.edu/portals/1475/nagel_bat.pdf )

As for the question that you are having a problem understanding, I will break it down into smaller questions to help me find what part you are having a problem with.

1) Do you understand what reductionism means?

2) Can you understand the following example: the heat of a gas is reduced to nothing but the average kinetic energy of its molecules in motion? (The textbook formula is sketched after these questions.)

3) Do you think that there are fields and/or fundamental particles in physics?

4) Do you think that the fields and/or fundamental particles in physics have features which can be referred to in physics equations?

5) Do you think that human beings have any features which can be reduced to fields and/or fundamental particles in physics and/or their features which are mentioned in physics equations?

6) Do you think that human beings have any features which cannot be reduced to fields and/or fundamental particles in physics and/or their features which are mentioned in physics equations?
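(Re question (2): the textbook identity I have in mind, stated for an ideal monatomic gas, is

    \left\langle \tfrac{1}{2} m v^{2} \right\rangle = \tfrac{3}{2} k_B T

where m is the molecular mass, \langle v^{2} \rangle the mean squared speed, k_B Boltzmann's constant and T the absolute temperature: the higher-level description, temperature, converts to the lower-level one, average molecular kinetic energy.)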

Yours sincerely, 

Glenn

2016-11-23
RoboMary in free fall
Reply to Glenn Spigel

Hi Glenn

I don’t have a copy of the Nagel article. It’s a tedious, rambling thing and not the kind I keep around. But the footnote is there.

Your “breakdown” of the question has only added layers of new problems. E.g. you seem to be simply assuming that this physics stuff (“fundamental particles” etc) will tell us something fundamental about the nature of human consciousness. I’ve already pointed out to you (twice) that I think that is a very questionable assumption. Many would agree with me. You would need to get over that hurdle before you pursue the kind of line you are taking. (I won’t hold my breath.)

By the way, you still haven’t answered the points I made to you re the Nagel proposition (including: “it won't do to say that you don’t need to specify what the “something” is. That would reduce to saying that the distinctive feature of consciousness is simply that it is like something. But just about everything you can think of is like something, so that would tell us nothing.)

DA.


2016-11-24
RoboMary in free fall
Reply to Glenn Spigel
Dear Glenn, It seems perhaps that your central concern is whether 'two conscious experiences are the same'. As I have indicated, there are several reasons why there may be no absolute fact of the matter. Similarly, there may be no absolute fact of the matter about which experiences in our heads we are 'referring to' when we talk about 'our experiences'. In science we recognise that there are often ascertainment problems, sometimes in principle, that may prevent a definitive assessment of whether a proposition is true. Philosophers like to think propositions are either true or false, but in the real world of science this is nonsense.

Every cell is a bit different from every other cell but also a bit similar. We cannot know if what it is like for one cell mode is 'the same' as what it is like for another, since there is no possible means of verification, so we have to judge the similarity of experiences on their content as defined by their extensions, hoping that this might reliably reflect their intensions and some interesting 'similarity'. But that does not get us to sameness. 

2016-11-24
RoboMary in free fall
Reply to Derek Allan
Hi Derek, 

I think it might be useful for you to read it again. Though it had been a while since I read it, on re-reading I can see that Nagel also uses reduction to illustrate his point. And from the article it is clear that the interpretation I used was correct. He does make the point about reduction:

We may call this the subjective character of experience. It is not captured by any of the familiar, recently devised reductive analyses of the mental, for all of them are logically compatible with its absence. 
The "something" is the description, as I have repeatedly told you. You can read in post http://philpapers.org/post/23998 me informing you of that again. The footnote I think you are were mentioning is footnote 6

Therefore the analogical form of the English expression "what it is like" is misleading. It does not mean "what (in our experience) it resembles," but rather "how it is for the subject himself".
But that is not a declaration that Nagel did not mean "like" in a comparative way, as you suggested. If you read where the reference to the footnote is made:

And if there is conscious life elsewhere in the universe, it is likely that some of it will not be describable even in the most general experiential terms available to us.
You can perhaps understand that it is simply that it is not necessary for us to understand the description (the "something" the conscious experience is like), because it is not necessary that the experience resembles ours. This is confirmed when he later writes:

I am not adverting here to the alleged privacy of experience of its possessor. The point of view in question is not one accessible only to a single individual....There is a sense in which phenomenological facts are perfectly objective: one person can know or say of another what the quality of the other's experience is. They are subjective, however, in the sense that even this objective ascription of experience is possible only for someone sufficiently similar to the object of ascription to be able to adopt his point of view - to understand the ascription in the first person as well as in the third so to speak. 
Also just for reference, you had stated:

Nagel’s proposition – or at least the proposition allegedly based on his article – is this specifically: “There is something it is like to be conscious”. 

What Nagel actually stated was:

But no matter how the form may vary, the fact that an organism has conscious experience at all means, basically, that there is something it is like to be that organism.

and shortly after:

But fundamentally an organism has conscious mental states if and only if there is something that it is like to be that organism - something it is like for the organism.

Could you answer the questions that I gave you, by the way? I am still not clear on what part of the question that I previously gave you, and that you repeatedly did not answer, you thought was ambiguous or not clear. 

Yours sincerely,

Glenn


2016-11-24
RoboMary in free fall
Hi Jo, 

I am currently just trying to understand more fully what your theory is. I am not concerned with how, if after scrutiny it seemed plausible, we would be able to establish it as a matter of fact. I am just checking it for plausibility.

As I understand your theory, what I am consciously experiencing is what-it-is-like to be a single neuron somewhere in the brain of the human form that I consciously experience having and seemingly controlling. So in a sense I am a neuron, but the human form that I experience having has multiple neurons, many of which are consciously experiencing, and many of which have the experience of seeming to control the human such that the human is discussing what they individually are consciously experiencing.

I had asked you earlier:

Regarding your single neuron idea: do all brain cells exist for the whole life of the human, and in particular does the one that we are experiencing being in your theory? If not, then are you of the mind that people's life expectancy might not be as long as they assumed (as they are a single neuron, and it could quite easily not last as long as the human)?  And if so, have you any idea of the likelihood of your neuron lasting until the human dies?

and you had responded:

Your questions about individual neurons are very familiar to me - the ones most people pose. But they are based on not actually reading what I wrote. I said nothing about a special neuron being me. I said that millions of neurons probably each experience being me. I have no reason to think there is one Jo Edwards subject or one Glenn Spigel subject. I used to assume that because I was brought up in a culture that assumed it. However, it was not assumed in intellectual circles in the mid nineteenth century, nor by Elizabeth Anscombe, and it has never been based on evidence or reason.

It does not matter how long neurons last. Most brain neurons present at age five probably last a lifetime but it does not matter. All that matters is that at any one time there are a few neurons around to be subjects. As John Locke pointed out there is no need for any enduring self. We have no reason to believe that a human subject that experiences being Glenn Spigel today is the same as one that had a similar experience five years ago. Many of the molecules involved will have changed, as will the structure of the organelles. We cannot pin our identity on continuity of matter. There may be dynamic entities in cells (like Bose modes) that last decades but we have no need to require that.
I think that when you use the term "me" you are using it in relation to the human that you, the subject (a neuron), experience being. Here I am intending to use "me" to refer to the subject that is consciously experiencing what I am consciously experiencing (a neuron, in your theory). You mention that other neurons (in whatever neural bank I am located in) would have the same content, but what it might be like for them could be different; I am assuming this is because internally they would be representing the content differently, and so their different internal dynamic relations would determine the conscious experience differently. Though I appreciate you are suggesting that there are lots of us (subjects/neurons) experiencing it as though we were individually controlling the human and that it (the human) was referring to each of us individually, even though, in your theory, the human would not be. I just thought I would mention that, to make it clear that I understand that in your theory the human could not attempt to refer to what I am consciously experiencing as a subject; instead it will just be referring to the content that is common to all of us neuron subjects that influence its behaviour.
As I referred to above, you have mentioned that the neurons in whatever neural bank I am located in would share the same experiential content as me, even though what it is like for them could be different. But does that mean that neurons in different neural banks have no content that I do not have? The reason I ask is that the human never seems to discuss any such content.

Regarding the neuron that I experience being: I would have thought it would matter to me (the neuron) if it died, because if your theory is correct (I assume it is a physicalist one), then if the neuron I experience being died, I would no longer consciously experience. Sure, the human would be other neurons experiencing, but my life (as a neuron) would be over. Consider the neuron-in-a-vat: there could be hundreds of individual neurons in different vats, but, if you were one of them, your subjective experience would presumably be dependent on which one you were experiencing being, and independent of what happens to the others. Sure, in the human, what happens to other neurons could affect the inputs to my synapses, but my continued experience depends on my neuron continuing to live, does it not, just as with the neuron-in-the-vat?

With regards to the content, consider a neuron-in-a-vat: could it experience the same content as what I am experiencing, and if so, what kind of difference in internal dynamic relation were you thinking would determine some content to be visual and some auditory? I have read you linking the content to function, but in such a case I am not sure what function you would consider the neuron to be performing. Would it be determined by its type, or the number of synapses, or perhaps which synapses were fired, or something else? 

Yours sincerely,

Glenn



2016-11-24
RoboMary in free fall
Reply to Glenn Spigel
If an experiencing neuron that has a sense of being Glenn Spigel dies, then presumably there are no more experiences attributable to that neuron. But nobody would know, because the only knowers would be other neurons in either the same or other heads. Any test of any theory now will provide evidence for neurons that are still experiencing. The inability of a dead neuron to get an experience will not figure as anything being missing.

2016-11-24
RoboMary in free fall
Reply to Glenn Spigel

Hi Glenn

RE: “I think it might be useful for you to read it [Nagel] again. “

I’d rather a trip to the dentist.

RE ”The "something" is the description, as I have repeatedly told you”

And as I have repeatedly asked you: What is the description? It has to be something specific otherwise you’re just left with “it is like something” – and everything is like something (as I’ve explained).

Part of me, I confess, feels quite sorry for people like yourself who try to defend the Nagel mantra. It is quite indefensible, yet still they try. Somewhere along the line, they must have been told it is important, and they have dutifully believed.

RE: ”Also just for reference, you had stated:

Nagel’s proposition – or at least the proposition allegedly based on his article – is this specifically: “There is something it is like to be conscious”. 

What Nagel actually stated was:

But no matter how the form may vary, the fact that an organism has conscious experience at all means, basically, that there is something it is like to be that organism.

So what’s your point? The Nagel proposition – the one I quote and that is repeatedly trotted out – is what Nagel says, as your quote shows.

RE: ”Could you answer the questions that I gave you by the way, because I am still not clear on what part of the question I previously gave you, and you repeatedly did not answer, that you thought was ambiguous or not clear. 

As I said, you would first have to convince me that all the physics palaver in your question tells us something important about human consciousness. As I also said, I won’t hold my breath.  If you want to go on about particles etc, Jonathan is your man, not me. I notice also that someone has started a thread about a wonderful, brand new “fundamental type of event related to mind/sentience/consciousness” that he calls a “psychule”. Might that be to your taste? It might turn out to be a close cousin of the "neuron-in-a-vat" you mention to Jonathan.

DA


2016-11-25
RoboMary in free fall
Hi Jo, 
But I, the subject that is having the conscious experience that I am having, would notice, as presumably I would if the neuron got damaged somehow.

From the last post, would you mind answering the following:

With regards to the content, consider a neuron-in-a-vat: could it experience the same content as what I am experiencing, and if so, what kind of difference in internal dynamic relations were you thinking would determine some content to be visual and some auditory? I have read that you link the content to function, but in such a case I am not sure what function you would consider the neuron to be performing. Would it be determined by its type, or the number of synapses, or perhaps which synapses were fired, or something else?


Yours sincerely, 

Glenn

2016-11-25
RoboMary in free fall
Reply to Derek Allan
Hi Derek, 
I have explained that the description would be a description of what you are consciously experiencing at a given point in time. So at the moment I am consciously experiencing typing a reply to you on a computer. That is (partially) what it is like to be me at the moment. The description will change, as later I expect I'll be consciously experiencing having a cup of tea. 

The questions were just to help you understand what people mean by consciously experiencing. You were claiming that you could not understand the question previously, so I broke it down for you to find which part you were having a problem with. Now it seems you just do not want to answer. If you prefer, I can cease the conversation; I am not going to try to force you to publicly admit that you understand.

Yours sincerely, 

Glenn

2016-11-25
RoboMary in free fall
Reply to Glenn Spigel
Dear Glenn,
But the subject in the recently defunct neuron that sensed that it was I (for those that still have this sense) will no longer have any sense of anything, so no sense of any lack of anything. Whatever subjects are now having the sense of being Glenn are clearly not those that died off a minute or two ago.

I think a neuron in a vat, suitably connected up, can be hypothesised to have an experience that is in no meaningful sense different from that which a neuron has in your head at present. That would include the same content, as defined extensionally and, as far as possible, intensionally as well. It might be very hard to ascertain this, but it would be the working hypothesis that one would hope would survive any possible test.

As to what would determine if the content involved colours or sounds, that would be a very interesting question and presumably it would have to do with the interrelationships between the spatial location of the relevant post synaptic potentials. I am wary of calling content 'visual' or 'auditory' in this context because those terms refer to earlier sensory input pathways, and in a vat you would not need to have separate pathways. The brain scientist studying the cell could connect up a programme to the cell that inputs signals giving colour and sound experience from the same electronic set-up.
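To make that concrete, a toy sketch (Python, with invented pattern names - an illustration of the set-up only, not a claim about real neural coding): one and the same rig delivers both families of patterns, so nothing about the delivery pathway itself marks a pattern as 'colour' or 'sound'.

# Toy sketch of a single stimulator feeding a vat neuron (invented pattern
# names; illustrative only). The same hardware delivers every pattern, so
# nothing in the delivery pathway marks content as visual or auditory.

COLOUR_PATTERNS = {"red": (1, 0, 1, 1), "green": (0, 1, 1, 0)}
SOUND_PATTERNS = {"middle_C": (1, 1, 0, 0), "hiss": (0, 0, 1, 1)}

def stimulate(synapse_log, pattern):
    # Apply one pattern of post synaptic inputs to the cell's input sites.
    for site, value in enumerate(pattern):
        synapse_log[site].append(value)

synapse_log = {site: [] for site in range(4)}
stimulate(synapse_log, COLOUR_PATTERNS["red"])      # a 'colour' input
stimulate(synapse_log, SOUND_PATTERNS["middle_C"])  # a 'sound' input
# Which patterns count as colour and which as sound would have to be fixed by
# the interrelations within the cell's own integration, not by the rig.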

There is also an important pitfall here. For a cell capable of sustaining experiences of both colours and sounds there should be a set of rules for which input patterns were perceived as colours and which as sounds. However, there is no reason why another cell, with a slightly different computational role in a brain, might not use the same patterns for colour that the first used for sound, or vice versa. A cell within the auditory signal collating system would use all the patterns it had available for dealing with sounds, so it could make use of patterns that a multimodal cell might allocate to colour. There is no need for the same set of rules across all cells.

Taking that into consideration it is clearly very difficult to set up any very specific hypothesis about which patterns of post synaptic potentials are used in neurons for experiences of a particular extensional content. But I think we have to try to explain qualia with patterns of interrelation within individual integration events in individual cells. If you try to explain qualia by patterns of action potentials distributed across the brain you have absolutely no reason to think they would 'feel' any different from each other because they have no immediate causal relation that might encode different feels.

2016-11-26
RoboMary in free fall
Reply to Glenn Spigel

Hi Glenn

RE: “I have explained that the description would be a description of what you are consciously experiencing at a given point in time. So at the moment I am consciously experiencing typing a reply to you on a computer. etc.."

But this is simply saying: I am aware of/conscious of/experiencing/etc. X, Y or Z. So what? We all know we are aware of/conscious of various things at various times. That tells us precisely zero about what human consciousness is.

Good grief! If only it were that easy!

RE: “Now you seem like you just do not want to answer. If you prefer I can cease the conversation, I am not going to try to force you to publically admit that you understand. “

This is a mite disingenuous of you, Glenn. What I said was: "You would first have to convince me that all the physics palaver in your question tells us something important about human consciousness". You have not done that, or even attempted to. As I have explained to you about three times, I question the basic assumption of your question (as many others would). To use a bit of your favourite phraseology: do you understand that?

I often have the feeling that you don't really understand the nature of the philosophical debates surrounding the notion of consciousness. Whether or not it can be explained in purely physical terms is a basic part of the controversy, and there are strong views on both sides. You (like Jonathan) like to pretend that the debate doesn't exist and that everyone thinks as you do. You're both living in a bubble as far as that is concerned, but out of a kind of list courtesy I often refrain from mentioning it. The tone of your last post compels me to be a little less generous.

DA


2016-11-27
RoboMary in free fall
Hi Jo, 

If one or more of the cells whose synapses I (one of the neuron subjects in your theory) connected to died, then presumably the internal dynamic relations in the cell (that you claim I as a subject am) would be different from how they would have been if those neighbouring cells had not died. And if those internal dynamic relations determine what it is like, then where previously they were such that it seemed as if I were in control of the human and had free will, after the change presumably that illusion would be broken, as it would be clear that the human was never discussing what it was like to be me (one of the neuron subjects), since it would not mention the change. Do you agree?

You mention:

...what would determine if the content involved colours or sounds, that would be a very interesting question and presumably it would have to do with the interrelationships between the spatial location of the relevant post synaptic potentials


and also that 

For a cell capable of sustaining experiences of both colours and sounds there should be a set of rules for which input patterns were perceived as colours and which as sounds. However, there is no reason why another cell, with a slightly different computational role in a brain, might not use the same patterns for colour that the first used for sound, or vice versa.

But I am not clear what you mean by a computational role (I would previously have considered it to involve where the cell is located within the network of neurons, but you seemed to suggest that the neuron-in-a-vat had a computational role). As I understand it, there are about 200 different types of neurons in the brain. Were you thinking that the computational role was determined by neuron type, or perhaps by the interrelationships between the spatial location of the relevant post synaptic potentials, or some other intra-type variation?

Yours sincerely,

Glenn


2016-11-27
RoboMary in free fall
Reply to Derek Allan
Hi Derek, 
Nagel was not discussing the nature of consciously experiencing; he was just referring to the feature whose nature we could look into. You do not seem to have understood his paper, and I think this was shown in your attempted proof that what he was saying did not make sense; it seemed to me that all you demonstrated was that you had not understood the point he was making.

You claim that you do not understand which feature is being investigated, and then, when I go to help and get to a crucial point, you claim not to understand the question; and when it is broken down you suddenly refuse to answer unless I convince you that it is worth understanding the feature that you claim not to understand. I think you should first understand what feature is being referred to, before trying to understand its importance. It is as if you do not want to be seen to understand, and earlier you almost seemed to take pride in your failure to understand the other people who have tried to help you with the matter.

You mentioned that you have the feeling about me that I 

...don’t really understand the nature of the philosophical debates surrounding the notion of consciousness. Whether or not it can be explained in purely physical terms is a basic part of the controversy, and there are strong views on both sides. 

I understand the nature of the debates (and what is meant by purely physical terms has been an issue in such debates that you seem not to have appreciated). I am just trying to explain to you what feature of a human being they are debating when they ask whether it can be explained in purely physical terms or not. By your own admission you have not been able to follow most of the debates, as you had not been able to figure out or guess what feature they were debating about. That is why I am trying to help you out, and it would help me help you if you answered those questions. It seems strange to me that you seem keen not to answer them. I do hope you have not been faking an inability to understand the philosophers, and wasting my time.

Yours sincerely,

Glenn

2016-11-27
RoboMary in free fall
Reply to Glenn Spigel

RE: “Nagel was not discussing the nature of consciously experiencing, …

So when Nagel said “There is something it is like to be conscious” you don’t think he was attempting to say something about the nature of consciousness? (or “consciously experiencing” as you rather oddly put it).

Most people think he was, but perhaps after all he was really talking about the weather or his golf scores, do you think?

DA


2016-11-27
RoboMary in free fall
Reply to Glenn Spigel
Dear Glenn,
Your first paragraph seems a bit jumbled, but we have quite a lot of information on what happens when cells die - from stroke, retinal disease, Alzheimer's etc. For those cellular subjects left behind, the death of another cell will have different effects depending on where it is in the system. If you lose a lot of retinal cells you sense a 'sort of grey blob', to quote my mother. That seems to be because pathways further up prompt early sensory path cells to feed signals into attention, and if there are none the system registers the absence. If cells involved in remembering names die, you realise you cannot remember a name. However, when cells a bit further up the sensory system die, you are not aware that anything is missing. You get agnosia. The remaining experiencing cells are not aware that there is anything that is not there that used to be. A cell cannot have a sense that 'I am not seeing red the way I used to' if input from memories has to go via the same cells that signal red in the present, which we think is the case. Throughout life what it is like for a cell to sense red may change constantly, but as long as remembered red comes through the same channel as present red nothing odd would be sensed.

I am not sure about the issue of free will, but we know that all sorts of shifts in belief can occur when some cells die. But then of course we know that beliefs change as synapses shift in strength throughout life with learning, quite apart from any cells dying. Moving to a realisation that there is no single 'I' in a head can occur without any cells dying! It occurred to Elizabeth Anscombe, as it has to me and Steve Sevush.

Computational role will depend on how the dynamics of the cell fit into the totality of computation of the system. Think of the play Hamlet and the role of Polonius. Polonius can be played by many actors, but they have to speak the right lines at the right time. However, in a film of Hamlet, like a cell in a vat, the role of Polonius could be filmed in Italy in 2013, because the actor lived there, and the role of Hamlet in New York in 2014, because that was when he had free time. As long as there were no shots of the two together, or you photoshopped them in, the actor could still play the role of Polonius in the film. So computational role relates to local structure and dynamics, and also to distant dynamics, in a way that you can jig around all sorts of ways. For the cell in the vat, what matters for it to play a role of face recognition is that signals of a sort relevant to face recognition are sent in and that the cell integrates over the signals in a way that is appropriate to the task. There is no simple answer to any of these computational questions. Brain computation is hideously complicated, and it is no surprise that philosophers like Fodor could not find single sentence definitions for meaning or reference or whatever.

2016-11-28
RoboMary in free fall
Reply to Derek Allan
Hi Derek,

You wrote:
So when Nagel said “There is something it is like to be conscious” you don’t think he was attempting to say something about the nature of consciousness? (or “consciously experiencing” as you rather oddly put it).

No, I don't. As I wrote, I think he was pointing out which feature he and many other philosophers were referring to by conscious experience. You wrote:

Most people think he was, but perhaps after all he was really talking about the weather or his golf scores, do you think?

I think you are being purposely slow here. I clearly do not think he was discussing the weather or golf scores, because I have already indicated that I think he was pointing out the feature of reality referred to by consciously experiencing. I think that is why he is quoted so often: the identification can be used by philosophers who have vastly different theories about the nature of that feature, because it is a nature-neutral identification and thus can be widely used (by an idealist, a dualist, or a physicalist). As for your understanding that most people think he was referring to the nature of consciousness (as though you had done a survey):

1) Can you give me a few (or even one other than yourself) examples of where a philosopher indicated that they thought he was writing about the nature of consciousness?

2) What did you think he was writing about the nature of consciousness? 

Yours sincerely,

Glenn

2016-11-28
RoboMary in free fall
Reply to Glenn Spigel

Hi Glenn

So, in two consecutive posts you have said:

Nagel was not discussing the nature of consciously experiencing

And then: I think he was pointing out which feature he and many other philosophers were referring to by conscious experience”

A blatant self-contradiction, I fear. Could you explain to me what you are “referring to” when you use the phrase “conscious experience” without “discussing the nature of consciously experiencing"? I await your attempt with bated breath.

Re: “What did you think he [Nagel] was writing about the nature of consciousness? 

I think he was writing nonsense, as I have said. (Remember my proof?)

DA


2016-11-28
RoboMary in free fall
Hi Jo, 

I think it is important to make a distinction between what happens to a neural system when cells die, and what will happen to the content of experience of a single neuron when a proportion of its neighbours die. I was asking (though admittedly not clearly) whether you agreed that the content of experience of a single neuron whose neighbouring cells were dying could become out of sync with the "content of experience" that the human was discussing.

[The reason I put "content of experience" in inverted commas is that it seems to me that in your theory there will be content of experience for each neuron, but that is distinct from what the human is discussing, as it is not discussing the conscious experience of any neuron, and no multi-neuronal subsection of the brain is subjectively experienced.]

You stated:

Computational role will depend on how the dynamics of the cell fit into the totality of computation of the system.
You then gave an analogy about a play and actors which I did not understand. The reason is that, as long as the "topology" of the play was the same, then, as you state, the spatial orientation does not matter. But with the neural network you have stated in post http://philpapers.org/post/24146 that

As to what would determine if the content involved colours or sounds, that would be a very interesting question and presumably it would have to do with the interrelationships between the spatial location of the relevant post synaptic potentials. 
This made it sound as though you were suggesting that not just the topology but also the spatial locations were important for the content. So a different neuron, even one with the same number of inputs, would not have the same content unless it also had similar interrelationships between the spatial location of the post synaptic potentials. So I did not understand the analogy. You also wrote in your last post to me that:

So computational role relates to local structure and dynamics, and also to distant dynamics...
But I am not clear what difference you were thinking the distant dynamics make, because, as I understood you, a neuron in the brain experiencing some content would experience the same content in a vat if the inputs were the same. I thought that indicated that the distant dynamics tended not to make a difference (thus I was not clear on the significance of the distant dynamics to the computational role of the neuron, whether in the brain or in a vat).

You write:

For the cell in the vat, what matters for it to play a role of face recognition is that signals of a sort relevant to face recognition are sent in and that the cell integrates over the signals in a way that is appropriate to the task.


But I am again not clear on what you mean. The reason is that I do not understand what difference the "interrelationships between the spatial location of the relevant post synaptic potentials" make to the task. That is because I thought the idea was that it was the totality of positive ions that were let in that was significant in a neuron firing, and so, as long as the synapses were "correctly weighted" as to how many positive ions they each let in, the neuron would fire appropriately to allow the system to deliver the evolved response. Thus I did not understand the evolutionary pressure for the "interrelationships between the spatial location of the relevant post synaptic potentials". Sorry if I am being a bit slow here, but I do not know much about it. I hope you do not mind explaining your theory in a bit more depth.
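To show the picture I have in mind (a toy sketch in Python, with made-up weights and threshold - not a claim about real neurons): on this textbook view the firing decision depends only on the weighted total of inputs, so relabelling which synapse sits where changes nothing.

import random

# Toy threshold-neuron sketch (made-up numbers; illustrative only): firing
# depends only on the summed weighted input - the "totality of positive
# ions" - not on where each synapse happens to sit.

def fires(weights, inputs, threshold=1.0):
    total = sum(w * x for w, x in zip(weights, inputs))
    return total >= threshold

weights = [0.4, 0.3, 0.2, 0.6]   # how many ions each synapse lets in
inputs = [1, 0, 1, 1]            # which synapses received a signal

# Relabel the synapses' positions by shuffling weight/input pairs together:
pairs = list(zip(weights, inputs))
random.shuffle(pairs)
shuffled_weights, shuffled_inputs = zip(*pairs)

assert fires(weights, inputs) == fires(shuffled_weights, shuffled_inputs)
# On this picture the spatial interrelationships add nothing to the firing
# decision, which is why I do not see the evolutionary pressure for them.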

As a side issue, I do not think that "Fodor could not find single sentence definitions for meaning or reference or whatever" because "brain computation is hideously complicated". I think it is because there are no single-sentence definitions of meaning or reference for any computation. Consider the computation performed by an AND gate.
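For instance (an illustrative sketch only; the two roles are hypothetical): the very same AND computation can occupy quite different "meaning" roles depending on the system around it.

# The same AND computation embedded in two hypothetical surrounding systems:
# its "meaning" is fixed by the embedding, not by the computation itself.

def AND(a: int, b: int) -> int:
    return a & b

# Role 1: "both smoke detectors agree" in an alarm circuit.
sound_alarm = AND(1, 1)            # 1 = sound the alarm

# Role 2: the carry bit of a half adder.
a, b = 1, 1
carry, low_bit = AND(a, b), a ^ b  # carry = 1, low_bit = 0

# Identical input/output behaviour in both roles, so no single-sentence
# definition of what the gate's output "means" can be read off the gate alone.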

Yours sincerely,

Glenn





2016-11-28
RoboMary in free fall
Reply to Glenn Spigel
Dear Glenn,
You seem to be tying yourself in knots because you are not quite following what I am saying. I think I have probably given as extensive an exposition of my approach as is appropriate. All my relevant thoughts are to be found in the posts above, but you need to read them carefully. Your last set of questions each includes a non-sequitur step from what I have said. I am happy for you to draw your own conclusions from my proposals, since you can do that as much as I.

Best wishes

Jo

2016-11-28
RoboMary in free fall
Reply to Glenn Spigel

 

My 15-minute YouTube talk "Detecting Qualia" (https://www.youtube.com/watch?v=AHuqZKxtOf4) was too much for Derek.

 

I've reduced these ideas down to a 500-word abstract (as required), and plan on submitting it to the 2017 Science of Consciousness Conference (https://eagle.sbs.arizona.edu/sc/index.php). Not sure if it will help here, but I believe I have reduced these qualitative ideas down to an absurdly simple and obvious way to state the problem of ineffable qualia and how to get around it.

 

https://docs.google.com/document/d/1HZDHdTxkt9kIMQovKs_x1B4H9-R0H0SYwQUoB0gh_IE/edit?usp=sharing

 

I'd love to have your thoughts, and to know if this helps anyone at all.

 


2016-11-28
RoboMary in free fall
Reply to Brent Allsop
Not just "qualia" but inverted "qualia" !  Step right up folks!

DA

2016-11-29
RoboMary in free fall
Hi Jo, 
I thought I saw some knots appearing, but had assumed they were in your theory. As for your claim that:

Your last set of questions each includes a non-sequitur step from what I have said.
I did not ask any questions in that post. I started off by pointing out a distinction that you seemed to have failed to recognise in your previous response, then raised some points which I was confused about and explained why, and finally pointed out that there are no "single sentence definitions for meaning or reference or whatever" for any computation, and that a lack of such definitions does not imply any complexity of computation.

Anyway, if you had wished to continue, you could have clarified the points that I was unclear about regarding your theory, but you have chosen not to, so I assume you do not wish to continue the conversation.

I do not think your theory works, by the way. The main problem is the symbolic relation issue, which I was taking a long route around to get to again in a way that you could understand. Nevertheless I think there are multiple problems; a simple one has to do with your first sentence to me:

Dear Glen,I sympathise with your championing of 'there is something that it is like'. It is something that all professional neuropsychologists recognise as a reality.


But in your theory the human that typed that, and the humans that are professional neuropsychologists, are not acting in response to what-it-is-like. The subjects for which it is like something in your theory are neurons, which do not individually control the human, nor communicate what it is like to be them to any other subject. So they and the other humans would seem to be acting on zombie-like motives, and what motive would there be to discuss what-it-is-like unless what-it-was-like was an influence on the human's behaviour?

Anyway, thanks for the discussion, it was an interesting theory, yours sincerely, 

Glenn

2016-11-29
RoboMary in free fall
Reply to Brent Allsop
Hi Brent, 
I watched the video, but am not clear on how you are suggesting it works. For example, you suggest that glutamate is red knowledge. Are you suggesting that when seeing red there are glutamate trails across the optic nerve and through the main brain etc., or just at specific areas in the brain? Also, what about thoughts? It seems strange to think that all thoughts are represented by differing molecule types, because not all thoughts are about molecules (but each molecule can be thought about).

Yours sincerely, 

Glenn

P.S. 

Hi Derek, 

In case you are reading this: I thought I had responded to post http://philpapers.org/post/24262 but I cannot see it. Because of the two-post-per-day limit I'll respond to it here, and will write again if you do not notice. You wrote:

So, in two consecutive posts you have said:
Nagel was not discussing the nature of consciously experiencing

And then: I think he was pointing out which feature he and many other philosophers were referring to by conscious experience”

A blatant self-contradiction, I fear. Could you explain to me what you are “referring to” when you use the phrase “conscious experience” without “discussing the nature of consciously experiencing"?

I was assuming a distinction between the mentioning of a feature and an explanation of the feature, such that you could reference a feature without having a view on its nature. So physicalists and dualists could both reference the same feature (which feature it was being made clear by Nagel) but differ in opinion with regard to its nature (physicalists would consider it to have a physical nature, and dualists perhaps a spiritual nature or a combination of physical and spiritual etc.). With the distinction there is no contradiction.

Yours sincerely,

Glenn 



2016-11-29
RoboMary in free fall
Reply to Glenn Spigel
Dear Glenn,
I agree that we have probably explored each other's ideas enough. The neurons do control behaviour. The fact that no single one does seems unimportant.

You seem to suggest that 'what it is like' has some additional causal power beyond the causal powers, which seems to me not to make sense. There may well be something that the causal powers that determine behaviour are like for the neurons, but that does not add some extra power, by definition. It simply indicates that the powers that operate are 'like something' to the neurons. And since we can never ascertain that, we can never have a theory that has to invoke what it is like. We just reckon it probably is.

I could not get my head around any of this in my twenties, and forgot about it in my thirties. It was only in my fifties that I came to realise that causation is not straightforward. I had found that out in the lab, and coming back to mind matters I realised that I had been 'too quick', as philosophers love to say.

Keep at it. It is a bone worth gnawing on.

2016-11-29
RoboMary in free fall
Reply to Glenn Spigel

Hi Glenn

RE: "I was assuming a distinction between the mentioning of a feature and an explanation of the feature, such that you could reference a feature without having a view on its nature."

OK. So what is the "feature" in the Nagel proposition?

(Asking you this question gives me a distinct feeling of déjà vu.)

DA


2016-11-29
RoboMary in free fall

RE: “The neurons do control behaviour.”

Judge to prisoner: You are a villain! I sentence you to ten years in prison.

Prisoner: Please, m’lud, it weren’t me! It was me neurons what did it. A sciencey chap told me that, so it must be true.

DA


2016-11-30
RoboMary in free fall
Hi Jo, 

You seem to be missing the point that in your theory the neurons do not communicate what-it-is-like to any other subject. So there is a series of little "black boxes" with regard to what-it-is-like, none of them reporting that it is like something to be them, and none of them having information about how they are arranged. Each only has synaptic input (which could be happening in a lab, so cannot even be counted as information that it has any neurons as neighbours), and none of them communicates information about its neighbours. So how can the human be reporting anything about "what-it-is-like"? It would seem to have no more reason to discuss the matter than a philosophical zombie.

Yours sincerely, 

Glenn



2016-11-30
RoboMary in free fall
Reply to Derek Allan
Hi Derek, 

Nagel indicates what feature of reality is being referenced by the term consciously experiencing. I am assuming no significant distinction between an organism having conscious experience and an organism consciously experiencing. 

Yours sincerely, 

Glenn



2016-11-30
RoboMary in free fall
Reply to Glenn Spigel
Dear Glenn,
Now you seem to be revealing a complete lack of understanding of information. Nobody has ever suggested that 'what it is like' is ever communicated from anything to anything, or that it needs to be. When you report seeing an orange to me I can guess what it is like for your subject(s) only because I have memories of what oranges are like to me. What it is like is never sent through the air in sound. That is not how meaning and language work. If I have never seen an orange and you tell me you saw one, you communicate nothing about what oranges might be like to anyone. Surely that is the whole point of the Mary story: that nobody could ever have told her what red was like. Why would one need cells inside a brain to tell each other what their red is like? The meaning of a signal in the brain to some subject is defined by the site and timing of the signal arrival, nothing else. That may surprise you, but the whole of neuroscience has to be based on that assumption.

As I indicated way back, 'reporting an experience' is a vernacular turn of phrase, rather as is 'what it is like', which does not actually mean what the component words would seem to add up to. We learn meanings by the complex use of deictic clues. We learn that to kick the bucket means to die. There cannot be such a thing as reporting an experience in the literal sense; we assume that the experience caused the report. To analyse experience either scientifically or philosophically one has to recognise that folklore accounts are often hopelessly confused. That is the whole point of science and philosophy, surely: to discover where the folklore is confused and replace it with something more consistent.

2016-11-30
RoboMary in free fall
Reply to Glenn Spigel

Hi Glenn

RE: “Nagel indicates what feature of reality is being referenced by the term consciously experiencing.”

Actually, he doesn't use the phrase "consciously experiencing". That's your odd terminology. The Nagel proposition is (I repeat): "There is something it is like to be conscious". Exactly that. Not some bizarre variation of it.

Now, nowhere in that proposition do I see any “[indication of] what feature of reality is being referenced” (to quote your own words). Do you? Please tell me what it is.  

(I’m assuming that your sentence above doesn’t mean that “consciously experiencing” is the “feature of reality” you think is “indicated”. You’d then be saying that consciousness is the consciousness of consciousness. Which is gibberish. And although Nagel’s proposition is in fact nonsense, as I’ve shown, even he didn’t say anything quite as absurd as that.)

DA


2016-11-30
RoboMary in free fall
Reply to Derek Allan
Hi Derek,

Previously, we were talking about a person that had a red/green inversion system installed in his perception process, either in the retina of his eye or in the optic nerve, immediately after the retina.

You said: "They simply see different colours."

In your way of thinking, is there an observable difference in the operating brain (after the optic nerve) between these two people (one who is "seeing different colours") and one who is not?  If so, what is this difference?


2016-11-30
RoboMary in free fall
Reply to Glenn Spigel

Hi Glenn,

Thanks for reviewing the video and asking these questions. When we perceive a strawberry, we are consciously aware of a 3D space containing a 3D model of the strawberry. If you break the 3D space up into voxel elements (like 2D pixel elements, only 3D), there is a set of voxel elements that is our conscious knowledge of the surface of the strawberry. Each of those voxel elements on the surface of the strawberry can have a redness quality, or they can have a greenness quality if the input is red/green inverted. The important point is that there is some neural correlate that is each of these 3D voxel elements of color of our conscious knowledge of the strawberry in our brain. Just as RoboMary can have abstracted knowledge (knowledge that does not have the same physical quality but must be qualitatively interpreted) about everything going on in each of these voxels and their colors, we don't yet know how to qualitatively interpret this knowledge of what we are observing in the brain. So we think these neural correlates are all just "grey matter". A clear qualitative misinterpretation of what these voxel neural correlates are qualitatively like in our conscious experience.

Don't get hung up on the exact physical makeup of the particular neural correlate of each of these voxel elements of knowledge - as it could be anything in the brain in any configuration. I just happen to currently think Steven Lehar's theories about what they are are the most supported by the evidence to date and the simplest to understand (see: http://cns-alumni.bu.edu/~slehar/Lehar.html ). He proposes an actual neuron representing each of the voxel elements. One possibility is that when it is red that particular neuron's synapses are firing with glutamate, and when it is green that particular neuron is firing with glycine. It might be the chemical reaction of glutamate in the synapse that has the redness quality we experience for that voxel. But this is just one theoretical physical possibility. You can substitute glutamate with anything else in the brain. Science will eventually tell us which physical stuff in our brain really has the redness and greenness qualities we can experience for any particular voxel in the visual field.
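If it helps, here is a toy sketch of the voxel idea (Python; the coordinates and the glutamate/glycine pairing are just the illustrative placeholders I used above, not settled science):

# Toy sketch of the voxel picture (illustrative placeholders only): each
# surface voxel of the 3D strawberry model is represented by a neural
# correlate, and on this theory the correlate itself has the quality.

QUALITY = {"glutamate": "redness", "glycine": "greenness"}

# A few surface voxels and the chemical their correlate neuron fires with:
normal = {(10, 4, 7): "glutamate", (10, 5, 7): "glutamate", (11, 4, 7): "glycine"}

def red_green_invert(voxels):
    # An inversion early in the perception process ends up swapping which
    # correlate each voxel element fires with.
    swap = {"glutamate": "glycine", "glycine": "glutamate"}
    return {pos: swap[chem] for pos, chem in voxels.items()}

for pos, chem in red_green_invert(normal).items():
    print(pos, chem, "->", QUALITY[chem])
# Voxel for voxel the invert represents the same strawberry, but each voxel
# element of his knowledge has the other quality.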


2016-11-30
RoboMary in free fall
Reply to Glenn Spigel

 

Derek pointed out exactly what the Nagel proposition is: "There is something it is like to be conscious".

 

The way people talk about it in these general, nonspecific ways is the biggest part of the problem. Instead of talking about all of consciousness in this ambiguous and complex way, first just talk about two clearly and specifically different elemental qualities, like redness and greenness. Then, once you understand and can communicate the qualitative difference between those, simply apply the same qualitative logic to everything else.

 

Also, instead of just saying "the redness I experience is 'like' something", say the much more specific "I have knowledge of red that has my redness quality."

 

Jo said:

 

"Nobody has ever suggested that 'what it is like' is ever communicated from anything to anything, or that it needs to be."

 

My paper describes exactly how to do that. It is subtitled "How to Eff the Ineffable". (https://docs.google.com/document/d/1HZDHdTxkt9kIMQovKs_x1B4H9-R0H0SYwQUoB0gh_IE/edit?usp=sharing ) I want to know what your redness is like, and I need to be able to eff this ineffable quality to do that.

 

Jo said:

 

"When you report seeing an orange to me I can guess what it is like for your subject(s) only because I have memories of what oranges are like to me"

 

At least for me, the memory of orange is qualitatively different from, and less than, the actual orange experience I have when I see an orange.

 

Jo also said:

 

"What it is like is never sent through the air in sound."

 

I disagree. It can be said that a binary "1" is like my redness and a binary "0" is like my greenness. Once you have that translation specified, then even if the physical representation of the "1" is air, sound, light, or whatever, and even if these physical representations do not have the physical redness quality, as long as you know how to properly interpret these likenesses you can communicate and simulate everything about my redness. But only if you know how to qualitatively interpret that which is representing the "1". RoboMary can be said to know everything about redness with abstracted knowledge that does not have a redness quality. But until you provide the qualitative translation table, you can't know what RoboMary's abstracted knowledge of everything about redness represents.
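In code the weak form of this is almost trivial (a sketch only; the quality labels are placeholders for whatever the real qualities are):

# Sketch of the qualitative translation table (placeholder labels): abstract
# data is qualitatively uninterpretable until the table is supplied.

abstract_knowledge = [1, 0, 0, 1, 1]   # abstracted colour data, medium unknown

translation_table = {1: "my redness quality", 0: "my greenness quality"}

interpreted = [translation_table[bit] for bit in abstract_knowledge]
print(interpreted)
# Whether the 1s and 0s are carried by air, sound, or light is irrelevant;
# everything hangs on knowing how to qualitatively interpret them.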

 

Also, there are multiple levels of effing the ineffable besides the weak one described above. These various methods can be classified as weak, stronger, and strongest (see my 15-minute video for a description of each).

 


2016-11-30
RoboMary in free fall
Reply to Brent Allsop

Hi Glenn

RE: "The way people talk about it in these general, nonspecific ways is the biggest part of the problem. Instead of talking about all of consciousness in this ambiguous and complex way, first just talk about two clearly and specifically different elemental qualities, like redness and greenness..."

The slight problem is, redness and greenness (whatever they are…) are not the topic. Consciousness is.  Nagel didn’t say “There is something it is like to be red or green”. (Though it would have been about as sensible as what he did say…)

And if we can talk about redness and greenness, why not bigness and littleness, goodness and badness, things and nothingness… Gee! The world’s our oyster! Indeed, why not oysters and absence of oysters?

RE: In your way of thinking, is there an observable difference in the operating brain (after the optic nerve) between these two people (one who is "seeing different colours") and one who is not?  If so, what is this difference?”

I have no idea. But before I even bothered to think about it, you would need to tell me what this physical issue will tell us, one way or the other, about human consciousness. (Apart from the fact that we need to have a brain to be conscious. Given that there don't seem to be many conscious skulls around.)

DA



2016-12-02
RoboMary in free fall
Hi Jo, 

You wrote:

Now you seem to be revealing a complete lack of understanding of information. Nobody has ever suggested that 'what it is like' is ever communicated from anything to anything, or that it needs to be. When you report seeing an orange to me I can guess what it is like for your subject(s) only because I have memories of what oranges are like to me. What it is like is never sent through the air in sound. That is not how meaning and language work. If I have never seen an orange and you tell me you saw one, you communicate nothing about what oranges might be like to anyone. Surely that is the whole point of the Mary story: that nobody could ever have told her what red was like. Why would one need cells inside a brain to tell each other what their red is like?
I was never suggesting that what it is like is ever sent through the air in sound. I was thinking that a description could be communicated and interpreted in a relevant way by something that understood the description. My point when I wrote:

...in your theory the neurons do not communicate what-it-is-like to any other subject. So there is a series of little "black boxes" with regard to what-it-is-like, none of them reporting that it is like something to be them, and none of them having information about how they are arranged. Each only has synaptic input (which could be happening in a lab, so cannot even be counted as information that it has any neurons as neighbours), and none of them communicates information about its neighbours. So how can the human be reporting anything about "what-it-is-like"? It would seem to have no more reason to discuss the matter than a philosophical zombie.
was that there was no information indicating that it is like something to be a neuron, nor any information that would form a description of what it was like, communicated at all. To help you understand the point I was trying to make, I can give you another clue. Imagine an arrangement of NAND logic gates, each taking two binary inputs, input 1 and input 2, and giving an output of 1 if either input 1 or input 2 were not 1, else giving an output of 0. Imagine also that each of these logic gates records a history of what its inputs were (input 1 being a 1 and input 2 being a 0, for example). Imagine also that there is a speaker system connected to the arrangement, receiving binary signals from it, and finally imagine that out of the speaker system came, in English, the statement that the NAND gates that make up the arrangement record the history of what their inputs are. It should be obvious to you, without needing to know any further details of the computation, that the fact that the NAND gates did record the history of their inputs played no part in the system outputting that that was the case. What makes it obvious is that no NAND gate provides information regarding whether that was or was not the case. Whether the NAND gates did internally record their history or not did not play a part in the system computation. So using that as a clue, can you manage to guess what I was thinking was an obvious problem with your theory? I assume that you will not be able to, and will therefore not address the issue.
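In case it helps, here is the arrangement as a toy Python sketch (hypothetical names, just to fix ideas): the recording happens, but nothing downstream reads it, so the outputs - and hence anything the speaker says - are identical with or without it.

# Toy sketch of the NAND arrangement (hypothetical names, for illustration):
# each gate records its input history internally, but nothing ever reads it.

class NandGate:
    def __init__(self, record_history):
        self.record_history = record_history
        self.history = []  # the internal record; no other gate can see it

    def output(self, in1, in2):
        if self.record_history:
            self.history.append((in1, in2))  # recorded, never consulted
        return 0 if (in1 == 1 and in2 == 1) else 1

def run(gates, input_pairs):
    # Feed each gate its pair of binary inputs and collect the outputs.
    return [g.output(a, b) for g, (a, b) in zip(gates, input_pairs)]

pairs = [(0, 0), (0, 1), (1, 0), (1, 1)]
recording = [NandGate(record_history=True) for _ in pairs]
silent = [NandGate(record_history=False) for _ in pairs]

assert run(recording, pairs) == run(silent, pairs)
# The speaker system is driven by the outputs alone, so whether the gates
# record their inputs plays no part in its saying that they do.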

You wrote:

The meaning of a signal in the brain to some subject is defined by the site and timing of the signal arrival, nothing else. That may surprise you but the whole of neuroscience has to be based on that assumption.

I feel a bit silly stating this, especially since you have expertise in the area, but I do not think the majority of neuroscientists would agree with you. I assume they would consider the brain to use multiple neurons arranged in such a way that computations are performed, that the meaning of a signal would be what it represents in the computation, and that this would take into account its place in the neural arrangement. Can you give an example of any neuroscientist other than yourself ascribing meaning to a signal based purely on the site it arrived at on the neuron and its timing? If I have misunderstood you, could you please let me know.

Yours sincerely, 

Glenn



2016-12-02
RoboMary in free fall
Reply to Derek Allan
Hi Derek, 

You wrote:

RE: “Nagel indicates what feature of reality is being referenced by the term consciously experiencing.”

Actually, he doesn't use the phrase "consciously experiencing". That's your odd terminology. The Nagel proposition is (I repeat): "There is something it is like to be conscious". Exactly that. Not some bizarre variation of it.

The phrase "it is like" first appears in the third paragraph. The one that starts with the sentence "Conscious experience is a widespread phenomenon." It appears in the sentence:

But no matter how the form may vary, the fact that an organism has conscious experience at all means, basically, that there is something it is like to be that organism.

As I mentioned in post http://philpapers.org/post/24438

I am assuming no significant distinction between an organism having conscious experience and an organism consciously experiencing. 

Also in post  http://philpapers.org/post/24482 you are responding to something Brett wrote, not what I wrote, even though you start off the reply with "Dear Glenn, ".

Yours sincerely, 

Glenn

P.S. 

Hi Brett, 

Hopefully you see this reply to your post http://philpapers.org/post/24418 . I do not think it is useful for you to stereotype what people think, as in comments such as:

So we think these neural correlates are all just "grey matter". A clear qualitative misinterpretation of what these voxel neural correlates are qualitatively like in our conscious experience.

Especially when the people you are writing to have made no indication of any such misinterpretation. You also write:

Don't get hung up on the exact physical makeup of the particular neural correlate of each of these voxel elements of knowledge - as it could be anything in the brain in any configuration.

I was not "hung up on the exact physical makeup" of your theorised neural correlates. I thought that was obvious from the point I had made, but which you ignored (in the sense of not addressing it), when I wrote in post http://philpapers.org/post/24402 :

Also, what about thoughts? It seems strange to think that all thoughts are represented by differing molecule types, because not all thoughts are about molecules (but each molecule can be thought about).
The point is not directed at any particular molecular makeup.  

Yours sincerely, 

Glenn





2016-12-02
RoboMary in free fall
Reply to Glenn Spigel
Dear Glenn,
You seem to be making heavy weather of this.

It is indeed obvious in your NAND gate example that the operation of the gates plays no part in the meaning of the output being about them handling signals. (You said 'recorded', which seems the wrong word here, but never mind.) It is precisely for this sort of reason that my model is constructed the way it is. I can understand why you might not understand that, but I am not sure I can help if you cannot.

It relates to the business about meaning to something in the brain being determined by site and timing of arrival. I agree that most of the neuroscientists who think they are working on consciousness and experience do not actually understand that their entire field is predicated on this. Ramachandran is a good example. He produced a howler about a blind man's brain seeing red when connected by a wire or axon to a seeing man's brain. But if you talk to what I might call real neuroscientists, like Semir Zeki, who is perhaps the most eminent vision physiologist alive and worked with Huxley of Hodgkin and Huxley, he would instantly agree with me. A message meaning something like 'there is red to the right of centre' will be carried by a neuron connected to rods to the left of the fovea. These neurons are otherwise exactly like any other sensory neuron in their signalling. So any component of the brain can only know that the signal is supposed to mean red right of centre by the fact that the signal is coming along that path - i.e. by the site where it arrives, since paths to sites of arrival are fixed in brains. There is no 'person' floating about like ectoplasm in the head that can sense an 'assembly' of neurons dancing some strange minuet that means red right of centre. I well remember Semir's account of how ridiculous this sort of idea is. He has a magical twinkle in his eye and he switched it on, accompanied by a raising of the eyebrows, when he said 'and they all believe it swirls about the cortex and you get IGNITION!!!' (referring I think to Dehaene or Tononi or someone).
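The point can be caricatured in a few lines (invented labels, of course - real pathways are vastly more complicated): the spikes are uniform, and the meaning lives entirely in the line of arrival.

# Caricature of the labelled-line point (invented labels): every spike is the
# same uniform event; its meaning is fixed by where and when it arrives.

MEANING_BY_SITE = {
    "left_foveal_path": "red right of centre",       # fixed by the wiring,
    "median_nerve_C6": "touch to the index finger",  # not by the spike itself
}

def interpret(arrivals):
    # arrivals: (site, time) pairs - the spike carries no content of its own
    return [(time, MEANING_BY_SITE[site]) for site, time in arrivals]

spikes = [("left_foveal_path", 0.012), ("median_nerve_C6", 0.019)]
print(interpret(spikes))
# Identical impulses, different meanings, encoded entirely in site and timing.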

The problem we have in science at the moment is that the dumbed-down popular version has more or less completely replaced the real version. The real version has a direct inheritance from Newton (the same dinner table at Trinity where Huxley was Master) and from Descartes and his rigorous view of locality. The dumb view spends all its time saying Descartes was a mystic, while making the error they accuse him of in a far worse form.

To understand all this you need to sit down and draw diagrams and see what could be possible. It is not easy because the diagrams rapidly get very complicated but it has to work the way I have suggested.

2016-12-02
RoboMary in free fall
Reply to Derek Allan

Hi Derek,

You were responding to me, not Glenn. And thanks for the quick reply, which I can hopefully work with. Most of my posts are made soon after the posts they reply to, but since I do not have "pro status" my posts need to be reviewed and accepted, resulting in them piling up and finally getting posted all at once, after several days, like my last three posts.

I more or less said to Derek:

"In your way of thinking, is there an observable difference in the operating brain (after the optic nerve) between these two people (one who is "seeing different colours" [due to the red/green inversion in the retina] and one who is not?)"

To which Derek responded with:

"I have no idea. But before I even bothered to think about it, you would need to tell me what this physical issue will tell us, one way or the other, about human consciousness."

Out of the two possibilities - there is, and there is not, a difference in the operating brain after the optic nerve - I would hope and assume that you would say there is an observable difference. Otherwise consciousness is epiphenomenal, and is not approachable via science, since whatever is responsible for these different colours being seen by different people can't be observed or distinguished with our physical observational instruments.

And part of whatever this difference is, is the neural correlate of you seeing the strawberry as redness, and of an inverted person seeing the strawberry as greenness. I always use glutamate as the neural correlate of redness. So just take whatever you think is responsible for you seeing the strawberry with a redness colour, and replay all my posts with glutamate replaced by whatever you think this difference from the invert is. In other words, instead of saying glutamate is responsible for me seeing the strawberry as redness, say that (whatever this difference from the invert is) is the neural correlate of, or responsible for, you seeing redness. And instead of saying that glycine is the different neural correlate of greenness, as I do, say that whatever the difference is in the inverted person is responsible for that person seeing the strawberry with a greenness colour.

And also note that an abstracted word like "red" does not have this redness quality, and in fact is interpreted as greenness by the inverted person. Also notice that the surface of the strawberry, even though it reflects something like 650 nm light, does not have the greenness quality with which your invert's perception interprets and represents the strawberry, nor is it likely to have your redness quality. The important point is that there are two different important physical qualities here: the initial cause of the perception process (something reflecting 650 nm light), and the final result of the perception process (the quality of your knowledge of it) - and the two physical qualities are very different. Their only relationship is the fact that your particular perception system is interpreting something that reflects 650 nm light with conscious knowledge that has a redness quality (or the strawberry is being interpreted as having a greenness quality by your invert).


2016-12-02
RoboMary in free fall
Reply to Glenn Spigel
Hi Glenn
RE: ”It appears in the sentence:”But no matter how the form may vary, the fact that an organism has conscious experience at all means, basically, that there is something it is like to be that organism.”

Yes. But the obvious meaning of that statement is that "there is something it is like to be conscious". And that's how it's been understood by large numbers of philosophers who cite Nagel as their authority. (Have a read through the literature - Chalmers, Block, dozens of others).

In the end, frankly, I don't care much how it's phrased. If you want it to read "There is something it is like to be consciously experiencing", that's fine by me. It's still vacuous nonsense, as I've shown in my proof (which no one, including your good self, has yet managed to come to grips with - though the reasoning is quite straightforward).

DA




2016-12-02
RoboMary in free fall
Reply to Brent Allsop
Hi Brett 
(Sorry for the name confusion)

RE: "Out of the two possibilities - there is, and there is not, a difference in the operating brain after the optic nerve - I would hope and assume that you would say there is an observable difference. Otherwise consciousness is epiphenomenal, and is not approachable via science..."

My point, more or less exactly. You are simply assuming that human consciousness is explicable solely in scientific terms (via “physical observational instruments” as you put it).

But this assumption is highly questionable. And to have even the slightest chance of making it stick, you would need to begin by stating clearly what you think human consciousness is (so we could at least know what you think you're explaining). I don't recall you having done that.

DA. 




2016-12-04
RoboMary in free fall
Hi Jo, 

You seem not to have understood any of the problems that I have pointed out with your theory. Even with the NAND gates, I am not sure you understood the problem I was pointing out. Perhaps you could paraphrase it, and then try to link the analogy to the problem I was pointing out. Here is another clue about the analogy: the mention of the internal recording of the inputs served a purpose, and the mention that the inputs were recorded was not a mistake. You can imagine that each NAND gate contains a hard drive on which the recordings are kept, or contains a person who writes the results in a book.

You write:

So any component of the brain can only know that the signal is supposed to mean red right of centre by the fact that the signal is coming along that path - i.e. by the site where it arrives, since paths to sites of arrival are fixed in brains.
You seem to be suggesting that the signal would actually represent red (it could be traced back to some cone firing, perhaps) and that the cell would "know" that. You mention that paths to sites of arrival are fixed in brains. But is there any evidence that, for all cells of a certain type in all brains in which they are found, signals at certain sites always mean red? It seems like your theory is scientifically testable.

Yours sincerely,

Glenn


2016-12-04
RoboMary in free fall
Reply to Derek Allan
Hi Derek, 
I repeatedly pointed out the problem with your proof: you had not considered that what was like the conscious experience was the description. You eventually wrote:

RE: “Nagel indicates what feature of reality is being referenced by the term consciously experiencing.”

Actually, he doesn't use the phrase "consciously experiencing". That's your odd terminology. The Nagel proposition is (I repeat): "There is something it is like to be conscious". Exactly that. Not some bizarre variation of it.

When I pointed out that you were wrong on that also (you had specifically stated that it was exactly as you stated it, and that you were not paraphrasing), you wrote:

RE: ”It appears in the sentence:”But no matter how the form may vary, the fact that an organism has conscious experience at all means, basically, that there is something it is like to be that organism.”

Yes. But the obvious meaning of that statement is that "there is something it is like to be conscious". And that's how it's been understood by large numbers of philosophers who cite Nagel as their authority. (Have a read through the literature - Chalmers, Block, dozens of others).

I have no problem with you interpreting it like that, as long as you are interpreting an organism being conscious as an organism consciously experiencing. As I mentioned earlier, I assume no significant difference between "having conscious experience" and "consciously experiencing". Anyway, as I mentioned, I have repeatedly pointed out why your proof did not work. You had misunderstood what he was writing. You had not realised he was identifying a feature (if you are in doubt of that, look through the literature and notice that philosophers use Nagel's identification to identify a feature).

I have made the point that he was identifying a feature repeatedly, and you seem to just ignore it and claim that neither I nor anyone else has got to grips with your proof (posted in post http://philpapers.org/post/23550 ).

I think you are in danger of being delusional.

Yours sincerely,

Glenn



2016-12-04
RoboMary in free fall
Reply to Glenn Spigel
Dear Glenn,
I am afraid your explanation of the problem you have found with my model is completely obscure. You talked about a single NAND gate, but now you are suggesting it has a storage capacity - which would mean a whole push-down stack automaton or Turing machine as far as I can see, not just a NAND gate. And you give no indication why this is relevant. You need to be more transparent.

I agree that my description of 'knowing what a signal is supposed to mean' invokes some epistemic complications, but I certainly do mean that the signal represents 'there is red right of centre' (note that it almost certainly cannot just mean 'red', which could not be made use of in a logical inference, but that is another issue).

As I indicated before, the idea that signals at certain sites always mean 'red at right of centre' is fundamental to all clinical and academic neurology. It has been documented in detail that within the superior colliculus and geniculate bodies signals at specific sites of arrival always mean certain colours at certain places. There are beautiful pictures of how the colour mapping is 'tiled' over the spatial mapping. The same applies to cortex, but as expected it gets more subtle. So the most famous brain experiment of all is that of Hubel and Wiesel, who showed that signals arriving at one particular site where they had put an electrode always meant a contrast along a line at a specific angle to the vertical. When I do a neurological examination I touch a patient's index finger with cotton wool because I know that signals meaning a touch to the index finger always pass along the median nerve and then the sixth cervical root to be received in the dorsal part of the relevant spinal cord segment, and so on.

If you think about it, if you have an internal signalling system with a uniform signal currency, as in a computer or a brain (electrical potentials in both), the meaning of a signal to any part of the system can only be encoded in the place and timing of arrival. For semiconductor gates it is just timing, and a good reason for thinking that there is nowhere in a computer that receives signals that are 'like something' in anything like the sense we talk about is that it is implausible that a single electrical impulse, or the absence of one, would be like anything in any sense similar to sunsets being like something to some component of our brain. 
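
To put that in toy form: here is a minimal sketch of mine (the lookup table and names are invented for illustration, not drawn from the neurology) of a system in which the pulse itself is featureless and its meaning is fixed entirely by where and when it arrives:

# Toy illustration: a uniform signal currency where meaning is carried
# only by site and time of arrival. The mapping below is invented.
SITE_MEANINGS = {
    ("site_17", "early"): "red right of centre",
    ("site_17", "late"): "red left of centre",
    ("site_42", "early"): "green right of centre",
}

def deliver(site, window):
    # The pulse carries no content of its own; (site, window) selects the meaning.
    return SITE_MEANINGS.get((site, window))

print(deliver("site_17", "early"))  # 'red right of centre'
print(deliver("site_99", "early"))  # None: it arrived nowhere that decodes it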

The problem we have at the moment is that many people in the neuroconsciousness industry want to suggest that arrays of signals are 'like something' while they are in the process of being sent around. But information theory does not allow a signal to have a meaning to anything until it has arrived at that something. If it does not have a meaning to anything it arrives at it is not a 'signal' in any valid English language or technical sense. It is just a perturbation in the universe. It may be very awesome that patterns of electrical potentials feel like sunsets and heavy metal music but at least if there is something that gets the pattern as an input the 'feel' can be explained in terms of the dynamic relations between the elements of the input. If the pattern of signals is just buzzing across a brain without having reached anywhere, there are no relevant relations to encode meaning in, because (1) they are not involved in the same interaction and (2) they are not yet functioning as signals.

2016-12-04
RoboMary in free fall
Reply to Glenn Spigel

Hi Glenn

Re: ”I repeatedly pointed out the problem with your proof: You had not considered that what was like the conscious experience was the description.”

And as I pointed out (a) Nagel expressly excludes the “like” of comparison - which you are using here - and (b) your statement is, in any case, nonsense. How can anything be “like the description”? A description of what? When I asked you that, you told me that “the description” was “consciousness” - or “consciously experiencing” to use your odd phrase. So, that meant consciousness was “like consciousness” which is (a) again a “like” of comparison, and (b) self-evident gibberish.

That’s as far as we got.  Not only was it not a refutation of my proof; it suggested you had not even read my proof properly, let alone understood it.  But as I said, you’re in good company. Others on other threads have fallen by the wayside too, although as I say it’s just a bit of straightforward language analysis – the kind of thing that should be meat and drink to a reasonably alert philosopher.

(By the way if anyone else wants to check out the proof in question, I copied it onto this thread. It’s dated 2016-11-14. All comments welcome, as long as they are to the point. Interestingly, I discovered by accident that Peter Hacker thinks very much along the same lines. It was nice to see that at least one other person had applied their mind to the Nagel nonsense. Few seem willing to do so - although hundreds dutifully recite it as if it were holy writ. )

RE: “You had not realised he was identifying a feature’”

 Oh dear, more of the same.  Once again what “feature”?  “Consciously experiencing” perhaps!!??

DA

PS I think I might call a halt to this conversation, Glenn. We’re just going around in unproductive circles and I note also you are getting a little ad hominem and I find that unpleasant.  


2016-12-04
RoboMary in free fall
Reply to Glenn Spigel
Dear Glenn,
I just happened to come across a statement of the principle of signal meaning being encoded by site and time of arrival. It comes at the top of page 74 of Randy Gallistel's 'Memory and the Computational Brain'. Randy is both a biologist and an authority on computational linguistics. Many might say he is the leading authority on signal coding in the brain when it comes to computational theory. He says:

' This principle has long been recognised in neuroscience. ... now called place coding. Place coding means that it is the locus of activity that determines the referent of neural activity, not the form of that activity. Activity in the visual cortex is experienced as light no matter how that activity is produced.. usually  produced by light falling on the retina ... but can be produced by pushing on the eyeball. ...'

I admit that he couches this place coding in terms of 'activity' without reference to arrival but hopefully I have convinced you in previous posts that for the signals to be like something to something the relevant 'place' must be arrival at whatever experiences them. The fact that Gallistel does not himself take this on board may be why after nearly 300 pages he finds no way to explain what he wants to explain. 


2016-12-05
RoboMary in free fall
Hi Jo, 

I suspected you had not understood the analogy. I will try to explain it further. A NAND gate is just a concept. A person raising their hands could function as a NAND gate, or a modern computer could (it would be a simple program to write). So firstly do not assume anything about the implementation other than it will function as a NAND gate and that it will record the inputs. If you find that confusing and would like a more "concrete" scenario then imagine a modern computer is being used to implement each NAND gate in the arrangement.

The NAND gate implementation recording the inputs is analogous to it being like something to be the neuron in your theory.

The NAND gate arrangement attached to the speaker is analogous to the neural arrangement in the human (attached to the mouth). 

The speaker announcement that the NAND gates making up the arrangement record the history of their inputs is analogous to the human in your theory announcing that the individual neurons in its brain are consciously experiencing subjects. 

The issue in both cases is that the components do not pass signals that indicate the truth of the statement. The statement in both cases is due to the component arrangement, not because it is as stated. 

Does that help at all?
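
In case a concrete rendering helps, here is a minimal sketch (my illustration; the class name is invented) of one NAND gate implementation that records its input history. Note that its output depends only on the current inputs, so nothing downstream ever receives a signal indicating that the recording happens - which is the point of the analogy:

class RecordingNandGate:
    def __init__(self):
        self.history = []  # record of every input pair this implementation has seen

    def __call__(self, a, b):
        self.history.append((a, b))   # the implementation records its inputs...
        return not (a and b)          # ...but the output depends only on (a, b)

gate = RecordingNandGate()
print(gate(True, True))   # False
print(gate(True, False))  # True
print(gate.history)       # [(True, True), (True, False)] - never signalled downstream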

You mention in post http://philpapers.org/post/24634 that Randy Gallistel wrote:

' This principle has long been recognised in neuroscience. ... now called place coding. Place coding means that it is the locus of activity that determines the referent of neural activity, not the form of that activity. Activity in the visual cortex is experienced as light no matter how that activity is produced.. usually  produced by light falling on the retina ... but can be produced by pushing on the eyeball. ...'


It seems to me that all Gallistel is pointing out is that the place in the arrangement determines the referent of neural activity. A computer designer might take the same view with a computer, and point out that a certain section is dealing with pattern recognition etc. With such theories the neuron or perhaps NAND gate could be transferred to another part of the system, and the referent of the activity it is involved in will change. 

You, though, are not in agreement with that: you are suggesting that the location of the neuron in the arrangement is irrelevant and thus the content the activity of the neuron refers to would remain the same even in a neuron-in-a-vat context. Though you seem to be slightly confused, as you also write (in the post I am responding to):

If you think about it, if you have an internal signalling system with a uniform signal currency, as in a computer or a brain (electrical potentials in both), the meaning of a signal to any part of the system can only be encoded in the place and timing of arrival.
Because what is important in a computer system regarding the meaning of the signal is the function the system is performing and the location of the signal within the system. And that does not seem in line with your theory. You also mention:

But information theory does not allow a signal to have a meaning to anything until it has arrived at that something. If it does not have a meaning to anything it arrives at it is not a 'signal' in any valid English language or technical sense. 
But I am not sure where you are getting that from, as I am not familiar with the concept of "meaning" in information theory. Perhaps you could explain further how you would like me to interpret that. Also I do not think that a signal has to be received to be a signal. The sentence "They never received the signal" makes sense. 

I had written in my last post to you:

You seem to be suggesting that the signal would actually represent red (it could be traced back to some cone firing perhaps) and that the cell would "know" that. You mention that paths of sites of arrivals are fixed in brains. But is there any evidence that for all cells of a certain type in all brains in which they are found, signals on certain sites always mean red? It seems like your theory is scientifically testable.
You did not seem to respond to the question. Though I did notice that you did write:

As I indicated before the idea that signals at certain sites always mean 'red at right of centre' is fundamental to all clinical and academic neurology. It has been documented in detail that within the superior colliculus and geniculate bodies signals at specific sites of arrival always mean certain colours at certain places 

I however was assuming that by "certain sites" you were referring to certain locations of the neurons in the arrangement such as a neuron's location within the superior colliculus for example. Which is the kind of idea that I assume Gallistel would agree with. But I am asking you about evidence "that for all cells of a certain type in all brains in which they are found, signals on certain sites always mean red?" To be clear by certain sites I mean certain synapse locations on neurons. Do you understand the question, and was I correct that you had not previously responded?

Yours sincerely, 

Glenn


2016-12-05
RoboMary in free fall
Reply to Derek Allan
Hi Derek, 
You wrote:

Re: ”I repeatedly pointed out the problem with your proof: You had not considered that what was like the conscious experience was the description.”
And as I pointed out (a) Nagel expressly excludes the “like” of comparison - which you are using here - 

And I pointed out in post http://philpapers.org/post/24098 that you were wrong and he did not. 

You went on to write:

and (b) your statement is, in any case, nonsense. How can anything be “like the description”? A description of what?

A description of the conscious experience, as I have repeatedly pointed out. 

You then went on to write:
When I asked you that, you told me that “the description” was “consciousness” - or “consciously experiencing” to use your odd phrase. So, that meant consciousness was “like consciousness” which is (a) again a “like” of comparison, and (b) self-evident gibberish.
I would never have intended to write that the description "was" consciousness. There is a clear distinction between claiming a description is of something, and claiming the description is that something. And your ceasing the conversation will probably mean that you will not support your claim that I wrote that the description was consciousness by pointing out where I wrote that.

As for your feeling that I was "getting a little ad hominem", I only stated that "I think you are in danger of being delusional"; I never stated that you were delusional. And I only wrote the former because you are claiming that no one has ever shown a problem with your proof, when I blatantly have, and you just ignore it. But your ignoring of responses was demonstrated again in your last post: you repeated a claim that 'Nagel expressly excludes the “like” of comparison' when I have already quoted what he wrote and pointed out that he did not (in post http://philpapers.org/post/24098). And in your response in post http://philpapers.org/post/24110 you did not contest that. Though you can notice there that I had pointed out to you that you kept misquoting Nagel, but that did not prevent you from going through the same loop again and claiming in post http://philpapers.org/post/24446 that 

The Nagel proposition is (I repeat): “There is something it is like to be conscious”. Exactly that. Not some bizarre variation of it.
Meaning I had to point out to you again in post http://philpapers.org/post/24550 what he actually stated. With hindsight can you not notice that you keep going around in the same loops even though the mistakes you were making in the loop had already been pointed out? (That is rhetorical, as I agree that we can halt the conversation; though if you did have an example of where I wrote that the description was consciousness then I would appreciate that, as it would show that you were not just making things up.) 

Yours sincerely, 

Glenn

P.S.

Your earlier claim that thought experiments that I and other philosophers use were "childish silliness" could also be regarded as a "little ad hominem".






2016-12-05
RoboMary in free fall
Reply to Derek Allan

Hi Derek,

You said:

"you would need to begin by stating clearly what you think human consciousness is (so we could at least know what you think you're explaining). I don't recall you have done that."

Part of being conscious is us having conscious knowledge of our 3D surroundings, including knowledge of ourselves at the center.  This conscious knowledge is a 3D model of us, and the space we occupy, in our brain.  We believe this knowledge to be reality, itself, but that is just a simplified model or seeming or optimization, and this simplified model doesn't fully represent the more complicated way things really are (we are really only aware of our knowledge of reality, not the reality itself).  This 3D model is composed of phenomenal qualities - diverse qualities including all colors, sensations, sounds, and so on.  Each of all of these diverse qualities can be broken down to an elemental quality level (along the lines of there are primary colors).  There must be a neural correlate, or something in our brain, that has each of these elemental qualities, which our brain is using to construct all this knowledge.  All these elemental qualities can be bound together by our brain to construct composite qualities, and everything we perceive and are aware of at any given moment in time.  We are aware of all of this 3D model (and how it is all qualitatively different) all at the same time.  We can say with absolute certainty that the redness knowledge located on the surface of the strawberry, of this 3D model, is qualitatively different than the greenness knowledge located in the model on the surface of the leaves, since we are aware of both of these qualities at the same time.

The testable prediction is that there is a 3D model of something, possibly firing neurons, in the brain, that is this 3D conscious knowledge.  The prediction is, that the causal properties of a neural correlate of a particular voxel we can detect, on the surface of the 3D model of the strawberry, are one and the same as the causal properties of the elemental redness we experience it has.  The prediction is, that we can already observe these causal properties of these qualities with our physical instruments, and that the knowledge we gain from these instruments and senses, is abstracted information that does not have the quality we are attempting to observe.  This abstracted knowledge we are receiving about what we are observing can describe everything about these qualities, but only if we know how to properly qualitatively interpret it.  The prediction is, that we are already observing most all of these qualities, and like RoboMary, know most everything about them.  The only problem is, we don't yet know how to properly qualitatively interpret all our abstracted knowledge of everything about redness and so on, and this lack of knowledge of how to qualitatively interpret what we are observing is the only thing that is preventing us from being able to eff the ineffable and know if someone else's redness is the same as ours.
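
If it helps, the kind of data structure I have in mind could be sketched like this (everything here - the names, the tiny grid - is invented for illustration; it is not meant as the neural implementation):

from dataclasses import dataclass

@dataclass
class Voxel:
    quality: str  # elemental quality, e.g. "redness" or "greenness"

# invented fragment of the 3D model: strawberry-surface vs leaf-surface voxels
model = {
    (0, 0, 0): Voxel("redness"),
    (1, 0, 0): Voxel("redness"),
    (0, 1, 0): Voxel("greenness"),
}

# At this abstracted level, comparing two qualities is just comparing labels,
# which is exactly the gap described above: the label is abstracted
# information about the quality, not the quality itself.
print(model[(0, 0, 0)].quality != model[(0, 1, 0)].quality)  # True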

All of consciousness includes much, far beyond just what we perceive.  And what I am talking about is just the qualitative subset of that conscious perception process, so I'm not talking about all of consciousness, just the qualitative nature of what we consciously perceive and are aware of.

Does that help to describe what I define "Qualia" or the qualitative nature of consciousness to be?


2016-12-05
RoboMary in free fall
Reply to Glenn Spigel
Sorry, Glenn, but your last post is full of non sequitur arguments in relation to what I have been saying. I agree that an internal truth evaluation step is needed to underpin the validity of the concept of 'reporting an experience'. The pathways needed to do this are complex but I have described how I think they must work in two essays on my website. One is called 'Response to Poeppel and Embick' and the other, which covers more or less my entire world view, is called 'Reality, Meaning and Knowledge' (300 pages). There is nothing confused or inconsistent in what I have been saying. It is that you keep jumping from one context to another without thinking through the implications for differences and similarities. You cannot move a neuron around, for instance. And I see nothing in what I am proposing requiring any recording of an experience at the site of experience. If you want to analyse this sort of thing you have to build a complete model of the pathways involved and they are far from simple.

2016-12-05
RoboMary in free fall
Reply to Brent Allsop

Hi Brent

RE: Part of being conscious, is us having conscious knowledge of..

You are defining something in terms of itself.

DA


2016-12-05
RoboMary in free fall
Reply to Glenn Spigel
Hi Glenn

More unproductive circles...

While I'm here, let me once again offer my demonstration of why the Nagel proposition is nonsense. I invite comments from anyone - as long as they’re to the point. (And just to set your minds at rest, I discovered after originally posting it that Peter Hacker - a philosopher of some note - shares my views - though I take the issue a stage further than he does.)  

 Why do I think this matter is important? Because large numbers of philosophers trot the Nagel proposition out as if it told us something basic about human consciousness. I maintain that once you subject the thing to a little straightforward analysis, you quickly see that, despite appearances, it is in fact vacuous nonsense.

Here again is my proof. Apologies for the length. No way really to make it shorter. (I call the proposition a “mantra” because it is recited mindlessly...)   

Having offered to give an explanation of why I think the “something it is like” mantra is useless, I thought I should put my money where my mouth is and give it. I’ve done so a couple of times before on other Philpapers threads and been largely met with blank looks (metaphorically speaking). I do hope that doesn’t happen here. [Alas, it did...]
The proposition we have to deal with, in its time-honoured form, is: “There is something it is like to be conscious.” This, according to numerous philosophers, great and small, tells us something important about the nature of human consciousness. It’s often invoked by philosophers of consciousness when they make their introductory moves defining what they’re talking about. It seems to have widespread acceptance and approval. But let’s just have a little look…
First, if we are serious philosophers, especially serious analytic philosophers, we’ll surely want to be clear about the meanings of the words we’re using. So what do we mean by the words in the proposition in question?
Clearly, the word we really have to focus on – the word that’s crucial to the proposition – is “like” (the rest are quite straightforward). So what do we mean by “like” here?
If we think about the range of meanings the word “like” can bear, there seem to be two possibilities in the context (please correct me if I am wrong). There’s the “like” of comparison or similarity, as in “like a diamond in the sky”; and there’s the like of “feels like” as in “I feel like a cup of tea”. The proposition in question could bear either of these meanings (which is a problem in itself, but I’ll come back to that). So “There is something it is like to be conscious” could mean “There is something it is similar to to be conscious” or “There is something it feels like (doing etc) to be conscious”.
(I should interpose here that someone on another thread once pointed out to me that Nagel ruled out the first alternative in a footnote to his article, but let’s keep it in for the time being for the sake of completeness.)
So, taking each proposition in turn, what firstly can we make of “There is something it is similar to to be conscious”? The obvious response to that statement is “Really? So what is it similar to?” And there’s the problem. Someone might perhaps answer “awareness” or “perception” or “mindfulness” or some such, but that’s just a little game of near-synonyms taking us nowhere. And obviously the mere fact of being similar to something is of no consequence since just about anything can be said to be similar to something else.
So the like of similarity/comparison gets us nowhere. When we insert this meaning, we end up shrugging our shoulders at the vacuity of it all. (And, as I say, Nagel himself ruled it out anyway.)
So let’s try the other meaning. In this case, “There is something it is like to be conscious” means “There is something it feels like (doing) to be conscious”. It’s not a matter of similarity now; it’s “like” as in inclination – e.g. (feel) like having a cup of tea.
So, what could it feel like (doing, being, etc) to be conscious? The question borders on the absurd, doesn’t it? Obviously, it could feel like anything – from having a cup of tea, to going on a holiday, to slitting one’s throat.
So the like of “feels like” gets us nowhere as well.
AND THOSE ARE THE ONLY TWO POSSIBLE MEANINGS OF “LIKE” IN THE CONTEXT. THERE ARE NO OTHERS. (Again, pls correct me if I am wrong).
This is why I say that the proposition in question is vacuous. Both possible interpretations of the word “like” take us nowhere. They give us a proposition that is either empty or near-nonsensical.
Well, one might say, why doesn’t this fact seem readily apparent when we first encounter the proposition? Why have so many people been taken in by it and claimed it meant something important?
I think the answer lies in the point I mentioned above – that, due to the odd phrasing of the proposition, “like” can bear two different meanings. Unless one undertakes the kind of analysis I’ve given above (the kind of analysis that should, surely, be almost instinctive for a philosopher – especially one of the “analytic” persuasion), one is easily bamboozled and led to believe that something deep and important is being said. (“Gee, yes! something it is like.”) This becomes obvious once you straighten out the syntax. If I say “Being conscious is like something” or “Being conscious feels like (doing) something”, one is immediately more wary. “What is it like?” one immediately wants to say? Or “What does it feel like doing?” The twisting of the syntax in the time-honoured formulation blurs the two meanings and deflects these obvious and quite sensible reactions.
If anyone thinks this reasoning goes wrong somewhere, I invite them to tell me where. But please don’t tell me I’ve analysed the proposition too closely – as I seem to remember someone once did. That reaction doesn’t befit a philosopher worthy of the name. And if you think the “like” in question means something different from what I’ve said, please tell me what that meaning is, how it fits the context, and why it makes the proposition in question important.  Oh, and please spare me too many references to Nagel's article. As I said in a recent post, I've seen multiple different interpretations of what he's saying, and in any case it's the "something it is like" formula itself that philosophers mostly rely on. Nagel is just mentioned (occasionally) as "authority" for it.


2016-12-06
RoboMary in free fall
Hi Jo,
I was not suggesting any recording of an experience at the site of experience. Perhaps read what I wrote again and you will notice that. Should I assume that you are still not able to understand the analogy?

Thanks for the mention of your essays, but before reading them, could you let me know whether they explain how the individual neurons consciously experiencing is relevant to the report of an experience?  If so, then perhaps you could mention where.

I think it would be useful, if you think an argument is a non sequitur, for you to point out which argument you are referring to and why you think it does not follow, because it might be a case of you not understanding the point.

Could you perhaps answer the question (that I have asked you twice before) about what evidence there is that for all neurons of a certain type signals to certain synapse locations on them always have a certain meaning e.g. red. So could you just name a type of neuron, and give an example of a certain synapse location on it always having a certain meaning no matter where that neuron is found in the brain (preferably use a neuron that is found in multiple areas in the brain)? 

In case you were going to suggest that it does not follow from your theory that such neurons should exist, I am assuming it does because you have stated that a neuron-in-a-vat would experience the same if the synapse activity was the same, and you also wrote:

As to what would determine if the content involved colours or sounds, that would be a very interesting question and presumably it would have to do with the interrelationships between the spatial location of the relevant post synaptic potentials. 


Yours sincerely, 

Glenn

2016-12-06
RoboMary in free fall
Reply to Derek Allan
Hi Derek, 
You "proof" includes the text:
(I should interpose here that someone on another thread once pointed out to me that Nagel ruled out the first alternative in a footnote to his article, but let’s keep it in for the time being for the sake of completeness.)


As I have pointed out (more than once) before, Nagel does not rule out the "like" of comparison. I explain that in post http://philpapers.org/post/24098

When Nagel states:

But no matter how the form may vary, the fact that an organism has conscious experience at all means, basically, that there is something it is like to be that organism.

The "like" is one of comparison. People have tried to describe what synesthesis is like for them for example. Another example would be a person who was given an hallucinogen and attempted to describe what the subsequent experience is like.  A person would relate the description to what they have consciously experienced to attempt to make sense of it, but as Nagel points out, what it is like is not what that description means to them, but to the person giving the description. That does not imply that all conscious experiences are very similar, two experiences can be quite different. So it is not just saying conscious experience is like conscious experience, rather a conscious experience can to some extent be described to give an indication of what it is like to a person that can make sense of the description. At the moment a description of my current conscious experience could include experiencing having a human form, and typing on a computer. 

Yours sincerely,

Glenn
  


2016-12-06
RoboMary in free fall
Reply to Glenn Spigel

RE: "As I have pointed out (more than once) before, Nagel does not rule out the "like" of comparison."

See: Nagel, Note 6: “Therefore the analogical form of the English expression ‘what it is like’ is misleading. It does not mean ‘what (in our experience) it resembles,’ but rather ‘how it is for the subject himself.’

Resembles = comparison, n’est-ce pas?

As for the rest, “how it is for the subject himself," that’s gibberish. He’s saying: “Consciousness is how it is for the subject himself.” Which would, on the most charitable reading, take us back to square one: what does “how it is” mean (if we rule out “what does it resemble?”)? The whole thing is twaddle. And I think he must have vaguely realised it. Hence a silly, equivocal footnote like this.

All of this is so obvious! Why have so few philosophers picked it up? What on earth has happened to philosophy? Doesn’t anyone examine language closely anymore? Just any old thing will do?  

DA


2016-12-06
RoboMary in free fall
Reply to Derek Allan

Hi Derek,

You said: "You are defining something in terms of itself."

Then I haven't yet explained the rest of it in sufficient terms for you to understand.  All of consciousness includes much more than I am talking about.  I was merely attempting to point out that I will only define a subset of consciousness - that subset being the conscious knowledge of ourselves, within the knowledge of our surroundings, based on the data our senses collect.  That was not the definition, just a clarification of what part of consciousness I am and am not defining.  Then I proceed to define what this subset - the conscious knowledge of our perception is (not in terms of itself).  I point out that it is a physical 3D model, within our brain, composed of voxel elements represented by neural correlates, possibly firing neurons.  I point out that each voxel of this physical 3D model has specific physical qualitative properties (like redness).  I point out that this 3D model of voxel neural correlates is bound together with a binding system to form a complete 3D model we can be aware of all at the same time.  Our conscious knowledge of all of this is this 3D model, made of physical things in our brain that have phenomenal qualities.  We can detect these voxel elements, and everything about them, with our instruments, but the information our detection instruments provide is abstracted information about the qualities.  In order to know the qualitative nature of what we are observing, we need to know how to qualitatively interpret the abstracted information our instruments and senses are providing to us.


2016-12-07
RoboMary in free fall
Reply to Glenn Spigel

Dear Glenn,

 

I do not know quite where my explanation of the mechanism of reporting comes in the essays and I would be loth to send you to those bits alone because they will make little sense unless you have digested the frameshifts in thinking that come earlier. The Poeppel and Embick essay is quite short and covers much of the relevant ground and not much else.

 

But let me try to explain a bit here.

 

As I see it we have to propose that ‘reporting an experience’ is an indirect process that is not one experience directly causing a report. If you ask me what I can taste I may say ‘mint’. In the intervening second or so some subjects in my brain will have experienced a minty taste. At the same time computational events will have generated the speech ‘mint’ much as a computer generates ‘printer ink is low’. It is likely that these events include integrations in cells that we could describe as experiences of mintiness but it is not clear that there is a fact of the matter about that. All sorts of integrations will occur at all sorts of levels of abstraction and multimodal combination (or otherwise) and in a second there may be a series of fifty such events. And none of them are verifiable as events that ‘are like something to something’.

 

But as you say, we need a truth evaluation step somewhere. I think we may be on firmer ground in saying that the experiential events we think we ‘report’ certainly include events in which we hear ourselves saying mint and continue to taste the mint, and there is some sort of analysis of congruence that stops us saying, ‘no, sorry, I meant basil’. Language works because our brains learn to associate words and sensations and there is constant re-checking of consonance both in terms of what others say and what we say.

 

This is analogous to the Libet situation. People are surprised that in Libet’s experiment the subject seems to be conscious of a decision after it is made. Yet that pretty much has to be so if consciousness is part of the causal chain (the very presumption they think is challenged). A conscious event that leads to a decision cannot be aware of that decision if the decision is caused by the aware process. But this in no way stops conscious events being necessary to that sort of decision-making (they do not need to be necessary in non-biological simulations with the same input-output rules). Making a ‘conscious decision’ is a long chain of (maybe 20msec) thought events each of which involves being conscious of some of the output of the previous one. So without these conscious events you do not get the decision.

 

The interesting question about simulating such events in a computer is where you are going to have the ‘consonance judging’ or truth evaluating events. Computers can only find that 0=0 or 1=1. It may be that neurons have events that do something much more complex, crucially with the format of information judged as consonant being different, as in: 3 x 7 = 21, or mint = this taste. In crude terms that is what quantum computation can do, although I think that is a red herring.

 

I think you have picked up on the fact that when I say meaning is encoded in site of arrival that is not the site of the recipient cell in the brain or vat but the site of the synapse within the single dynamic relation of postsynaptic integration in the receiving neuron. Where the neuron is cannot matter but where a synapse is within a dynamic relation is just the sort of difference that could give us a combinatorial range of meanings.

 

The evidence for sites of synapses mattering to meaning is as far as I know sparse. This is an area just opening up. Paul Tiesinga has published relevant data. The best clue is perhaps David Marr’s analysis of Purkinje cells in the 1980s. These cells have different sorts of inputs from different sorts of cells with very different conformations. I think we can only make sense of meaning if the same applies to cortical cells but it is very hard to establish the microanatomy.

 

Moreover, as said before, it is not essential that exactly the same position in exactly the same dendritic structure should mean ‘there is red to the left’ (again I emphasise that ‘red’ is not a meaning in this context). All that we need is that within the maths of the dynamic relation of integration the site has the appropriate functional capacity. Even for cells of a certain functional type in a tiny patch of cortex we do not need ‘what it is like’ to experience ‘red to the left’ to be the same, even if one could meaningfully say that. Each can have a private language for comparing signals for consonance and nobody would ever know. (But you probably could not swap colours for tastes.)

It would remain true, however, that if you very carefully moved such a cell to a vat and supplied it with perfect presynaptic bouton replicas for inputs, one can work on the assumption that its experiences would be the same as they were in the brain for any given input.

 

 


2016-12-07
RoboMary in free fall
Hi Jo, 
Regarding my question:

...what evidence there is that for all neurons of a certain type signals to certain synapse locations on them always have a certain meaning e.g. red. So could you just name a type of neuron, and give an example of a certain synapse location on it always having a certain meaning no matter where that neuron is found in the brain (preferably use a neuron that is found in multiple areas in the brain)? 


You responded:

The evidence for sites of synapses mattering to meaning is as far as I know sparse. This is an area just opening up. Paul Tiesinga has published relevant data. The best clue is perhaps David Marr’s analysis of Purkinje cells in the 1980s. These cells have different sorts of inputs from different sorts of cells with very different conformations. I think we can only make sense of meaning if the same applies to cortical cells but it is very hard to establish the microanatomy.


From wiki ( https://en.wikipedia.org/wiki/David_Marr_(neuroscientist) ) I read: 

The cerebellum theory[2] was motivated by two unique features of cerebellar anatomy: (1) the cerebellum contains vast numbers of tiny granule cells, each receiving only a few inputs from "mossy fibers"; (2) Purkinje cells in the cerebellar cortex each receive tens of thousands of inputs from "parallel fibers", but only one input from a single "climbing fiber", which however is extremely strong. Marr proposed that the granule cells encode combinations of mossy fibre inputs, and that the climbing fibres carry a "teaching" signal that instructs their Purkinje cell targets to modify the strength of synaptic connections from parallel fibres. Neither of those ideas is universally accepted, but both form essential elements of viable modern theories[citation needed].


Is this the kind of thing you meant (Marr proposing that the granule cells encode combinations of mossy fibre inputs and that the climbing fibres carry a teaching signal to the target cells)? Because I do not see there any mention that certain synaptic locations always have certain meanings. Could you perhaps paraphrase what David Marr wrote that you thought was evidence (or even just a clue) that certain synaptic locations always have certain meanings? Are you suggesting for example that if one were to investigate the location of parallel fibres on Purkinje cells, that would reveal that there will be a correlation between the fibre's location on the Purkinje cell and what the signal represents (across all Purkinje cells)?

Regarding your theory about how the individual neurons consciously experiencing is relevant to the report of an experience. I had mentioned earlier (back in post http://philpapers.org/post/24398):

The subjects for which it is like something in your theory are neurons, which do not individually control the human, nor communicate what it is like to be them to any other subject. 


Though you did not seem to understand the issue, and interpreted what I meant in what I regarded as a rather strange way. But since you brought information theory up, perhaps I could rephrase. You earlier, in the first post to me on this thread ( http://philpapers.org/post/21758 ), wrote:

We probably need 100-1000 degrees of freedom for the event to cover all the experiences we can describe. 


I am not totally sure what you meant by degrees of freedom, so instead I will relate it to information theory, which you had brought up. A 2 input 1 output NAND gate can receive 4 messages and output 2. The neuron can receive way more, but how many messages can it output, and how does this compare to how many messages it would require to be able to convey what its experiential content was? In case I have not expressed this clearly enough, consider the amount of bits of Shannon information needed to express its experiential state (which you suggest requires 100-1000 degrees of freedom).
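
As a rough illustration of the arithmetic (a sketch of mine; the state counts are just examples, and I assume the simplest uniform coding): identifying one of N distinguishable states takes log2(N) bits, whereas a 2 input NAND gate outputs 1 bit per step.

import math

for n_states in (4, 10_000):
    print(f"{n_states} states -> {math.log2(n_states):.1f} bits to identify one")
# 4 states -> 2.0 bits; 10000 states -> 13.3 bits.
# A 2-input NAND gate outputs only 1 bit per step, i.e. it can
# distinguish just 2 messages per output, whatever its inputs encode.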

Yours sincerely,

Glenn


2016-12-07
RoboMary in free fall
Reply to Derek Allan
Hi Derek, 
You wrote

RE: "As I have pointed out (more than once) before, Nagel does not rule out the "like" of comparison."

See: Nagel, Note 6: “Therefore the analogical form of the English expression ‘what it is like’ is misleading. It does not mean ‘what (in our experience) it resembles,’ but rather ‘how it is for the subject himself.’
Resembles = comparison, n’est-ce pas?

The full paragraph containing the part you quoted from me read: 

As I have pointed out (more than once) before, Nagel does not rule out the "like" of comparison. I explain that in post http://philpapers.org/post/24098
And in the post I quoted, I quoted footnote 6, and gave the context, and pretty much pointed out that what Nagel was saying was that it is not what (in our experience) it resembles but what it resembles in the subject's experience. 

And as I explained in the post you were replying to he is not just suggesting that conscious experience is like conscious experience, as what the conscious experience is like can vary over time, and so the subject's description of what their conscious experience is like can vary.

I think we can halt the conversation here, as again you are just making claims of problems which you have made before and I dealt with, and you are omitting quoting where I dealt with them, and not rebutting my dealings with them, but just wasting my time by remaking the claim, and leaving it to me to point out where I have rebutted it. So if that is all you are capable of doing (as opposed to rebutting my rebuttal) then that is fine, but I see no point in continuing the conversation.

Yours sincerely,

Glenn 

2016-12-08
RoboMary in free fall
Reply to Glenn Spigel
Dear Glenn,
As indicated before, Marr's model only gives us a faint hint that we can see different synapses being used to convey different sorts of meaning. The default model, known as integrate (really summate) and fire, holds that it does not matter which synapse is used for an input - the cell just adds up the number of inputs and fires when it is above a threshold. Marr's analysis indicates that at least in cerebellar cells different synapses are used to convey different meanings to the cell. In simple terms the mossy fibre inputs might mean 'you put your right leg in, you put your right leg out...' and the inputs from climbing fibres, I think from olivary nuclei, would mean 'learn that'. This would obviously be a class difference in meaning rather than a difference such as 'here is red' versus 'here is blue'. But it makes the point that the default model almost certainly is wrong in some cases. That is as far as the literature took us until around 2009, apart from evidence from people like Koch and Segev on insect neurons. They showed that summate and fire can be wrong and that cells can recognise patterns of inputs rather than strengths of inputs. More recently the non-summative aspect has been explored in terms of fine-level timing differentials. But all we can say is that it is plausible that it matters which synapse is used for an incoming meaning. My reason for proposing that this has to be the case rests on a priori arguments as laid out in my Frontiers in Psychology paper on representations as inputs. There may in fact be a whole lot more relevant data generated in other near subspecialties but one of the problems of neuroscience is that it is so complex and divided up into so many subcommunities that one often misses relevant literature. 

I do not follow your claim that cells do not individually control the human or communicate what it is like to be a subject to other subjects. I doubt that the word control is helpful here. It can help when dealing with servo systems and feedback but I think you are simply meaning 'cause to act' here. We know that in real dynamics nothing is ever caused by one action. All events are caused by the totality of preceding circumstances impinging on the event. The concept of 'an agent' is dubious. In real life our actions will be caused by the totality of neural events, but at a fine-grained level there will be actions that would not have happened if one particular cell had not fired. So as far as I can see neurones 'individually control' human behaviour in much the way that anything ever does.

Moreover, I see no reason to doubt that the cells communicate what it is like to be a human subject  through the mechanism I indicated before if nothing else. With an input meaning 'are minty and this taste consonant' and an output 'correct' the cell has communicated that it is sensing mintyness in the way it does. Similar signals coming to similar cells either in the same head or another head can evoke what mintyness is like to that other subject so that subject can be informed that the first cell has been sensing the equivalent what it is likeness for mint for that cell. There need be no inference that what it is like for one cell is similar to what it is like for the other. That is not what 'communicating what it is like' actually means in any context. The idea that somehow what it is like for a particular subject can be conveyed in the way that Shannon information is conveyed is just folk psychology.

When I say degrees of freedom I am assuming that it is clear that for a binary degree of freedom that provides one bit. If the degree of freedom provides many describable values along a scale we get more bits as implicit in Shannon type theory.

The input to a pyramidal cell is probably in the region of 50,000 bits simultaneously. There are a whole lot of ways one can argue that is too generous or possibly too restrictive. I tend to take the conservative view that the cell may have around 5,000 discriminable bits of input. It is generally agreed that even our visual experiences do not require more than that richness of coding. I have neuroscience colleagues who claim that a visual experience only needs 50 bits but I think this is implausible. I tend to think it must need 1000. That is around the right number. One of the reasons for not suggesting that experience is encoded in patterns of action potentials across the cortex is that at any one time there are probably millions of such spikes, or tens of millions. The total number of synapses being reached may be tens of billions. That is way too many to make any sense either in computational terms or phenomenal terms.

2016-12-08
RoboMary in free fall
Hi Jo,
You wrote:

Marr's analysis indicates that at least in cerebellar cells different synapses are used to convey different meanings to the cell. In simple terms the mossy fibre inputs might mean 'you put your right leg in, you put your right leg out...' and the inputs from climbing fibres, I think from olivary nuclei, would mean 'learn that'. 

I did not understand where that is indicated. In the bit I quoted previously from wiki

From wiki ( https://en.wikipedia.org/wiki/David_Marr_(neuroscientist) ) I read: 

The cerebellum theory[2] was motivated by two unique features of cerebellar anatomy: (1) the cerebellum contains vast numbers of tiny granule cells, each receiving only a few inputs from "mossy fibers"; (2) Purkinje cells in the cerebellar cortex each receive tens of thousands of inputs from "parallel fibers", but only one input from a single "climbing fiber", which however is extremely strong. Marr proposed that the granule cells encode combinations of mossy fibre inputs, and that the climbing fibres carry a "teaching" signal that instructs their Purkinje cell targets to modify the strength of synaptic connections from parallel fibres. Neither of those ideas is universally accepted, but both form essential elements of viable modern theories[citation needed].


It did not seem to indicate that the location of the synapse was important; it instead seemed to indicate that the input from the climbing fibre was stronger (so strength of input, not location), which seems perfectly compatible with a default model which, rather than adding up the number of inputs, fires when there are enough positive ions at the axon hillock to reach the threshold to cause it to let in more positive ions. And before looking at any other evidence, I would like to examine this, as you previously considered this piece of evidence to be the "best clue" that the location of the synapse on the neuron was important (rather than strength of input, for example). 

Regarding the problem I was pointing out I had written:

I am not totally sure what you meant by degrees of freedom, so instead I will relate it to information theory, which you had brought up. A 2 input 1 output NAND gate can receive 4 messages and output 2. The neuron can receive way more, but how many messages can it output, and how does this compare to how many messages it would require to be able to convey what its experiential content was? In case I have not expressed this clearly enough, consider the amount of bits of Shannon information needed to express its experiential state (which you suggest requires 100-1000 degrees of freedom).

You wrote:

The input to a pyramidal cell is probably in the region of 50,000 bits simultaneously. There are a whole lot of ways one can argue that is too generous or possibly too restrictive. I tend to take the conservative view that the cell may have around 5,000 discriminable bits of input. It is generally agreed that even our visual experiences do not require more than that richness of coding. I have neuroscience colleagues who claim that a visual experience only needs 50 bits but I think this is implausible. I tend to think it must need 1000. That is around the right number.


Well with a computer a picture could use 24 bits per pixel (8 bits to indicate the colour intensity of the red, 8 bits to indicate the colour intensity of the green, and 8 bits to indicate the colour intensity of the blue) and so you would multiply that by the number of pixels. Though obviously there are techniques to reduce the amount of information: for example, using 32 bits to encode the colour and how many pixels in a row (up to 256) have that colour (see the sketch later in this post). Now with neurons I assume the firing frequency will indicate intensity, but I still do not understand how a visual experience could always be done with only 50 bits (as your colleague suggested) even if the experience was of a 10 pixel by 10 pixel grid of black or white squares. I should think quite a few computer artists when working close up are seeing grids of over 100 x 100 with a variety of colours on their screens, and that is not the totality of their visual experience. I am perhaps getting a little off the point, but I want to relate it to where you wrote:

..., I see no reason to doubt that the cells communicate what it is like to be a human subject  through the mechanism I indicated before if nothing else. With an input meaning 'are minty and this taste consonant' and an output 'correct' the cell has communicated that it is sensing mintyness in the way it does.


But there are numerous inputs and in your theory different combinations would give rise to the neuron experiencing different content. So for the sake of simplicity imagine that there were 10000 different variations of experiential content that the neuron was capable of experiencing. How many bits of Shannon information were you thinking it would take to express which variation of experiential content (such as the "minty taste" variation) it had, and how many does a neuron output?
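
And here, to make the earlier run-length remark concrete, is a minimal sketch of the 32-bits-per-run scheme I mentioned (the function name is invented; runs are capped at 256 pixels):

def rle_encode(pixels):
    # pixels: list of (r, g, b) tuples; returns (count, colour) runs, count <= 256
    runs = []
    for px in pixels:
        if runs and runs[-1][1] == px and runs[-1][0] < 256:
            runs[-1] = (runs[-1][0] + 1, px)
        else:
            runs.append((1, px))
    return runs

row = [(255, 0, 0)] * 5 + [(0, 255, 0)] * 3      # 8 pixels = 192 bits raw at 24 bpp
runs = rle_encode(row)
print(runs)                                       # [(5, (255, 0, 0)), (3, (0, 255, 0))]
print(len(runs) * 32, "bits encoded vs", len(row) * 24, "bits raw")  # 64 vs 192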

Yours sincerely,

Glenn 



2016-12-08
RoboMary in free fall
Reply to Glenn Spigel
Dear Glenn, Why not read my essays and judge from them? At the rate I am going I will end up writing more here than in the essays.

If you look at what I wrote I agreed that 50 bits was implausible. I suggested 1000-5000. But the coding is going to be nothing like pixels. That would be hugely inefficient as any compression algorithm writer knows. Most neuroscientists I know agree that encoding of visual experiences will be in signals with meanings more like words than pixels. In fact I am pretty sure each signal means a full proposition, as I have said. When I said 'there is red to left of centre' that would not be encoding a pixel but a relation within the image. As another example a visual experience of five red roses is likely to be composed of signals for things being red, for the things being rose-type and for there being five of them. Nothing like pixels. PowerPoint uses this sort of encoding, I suspect.

Strength is probably not encoded in rate. Rate works by increasing the chance of the signal catching the receiving cell at a 'sweet point' of response. It is more or less impossible to encode meaning in rate itself. 

The cell's output will be one bit but that will be enough to encode two to the power 5,000 possible experiences because all it needs to mean is 'yes, that pattern you have stored over there that you are asking me if it is that one'. That is basically how communication works - by deixis or pointing to a referent. The cunning part of language is pointing in a virtually infinite number of ways by the combinatorial use of signs. Within a brain all that is needed is a sign for 'yes, what you just signalled fits'. 
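
A toy sketch of the deixis point (my illustration; the pattern and names are invented): the single output bit can fix reference to an enormous referent only because the questioner already holds the candidate pattern it is asking about.

STORED = (1, 0, 1, 1, 0)  # the pattern the asking cell presents: 'is it that one?'

def cell_a(candidate, my_pattern=STORED):
    # One bit out: 'yes, that pattern' or 'no, not that pattern'.
    return candidate == my_pattern

print(cell_a((1, 0, 1, 1, 0)))  # True - one bit, but it fixes reference to a
                                # specific pattern held by the questioner
print(cell_a((0, 0, 0, 0, 0)))  # False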

2016-12-10
RoboMary in free fall
Reply to Brent Allsop

Hi Brent

RE: All of consciousness includes much more than I am talking about.  I was merely attempting to point out that I will only define a subset of consciousness –

This runs into the same problem.  How can you know that what you’re describing is “a subset of consciousness” if you are not first able to say what consciousness is? It might well be a subset of something, but how do we know that the something is human consciousness?

DA


2016-12-11
RoboMary in free fall
Hi Jo, 

I noticed that you did not offer any information as to why the Marr analysis was evidence that the location of the synapse on the neuron was important (rather than strength of input, for example). Which I found slightly strange, considering you regarded it as the "best clue" that that was the case.  

You wrote:

The cell's output will be one bit but that will be enough to encode two to the power 5,000 possible experiences because all it needs to mean is 'yes, that pattern you have stored over there that you are asking me if it is that one'. That is basically how communication works - by deixis or pointing to a referent. The cunning part of language is pointing in a virtually infinite number of ways by the combinatorial use of signs. Within a brain all that is needed is a sign for 'yes, what you just signalled fits'. 
I am not sure how you think a referent can be pointed to in any communication. I am not suggesting it is not; I am just not sure how in your story you think it happens. I assume you could not explain it for a computer system. Anyway, I will ignore that for now.

So consider cell A which gives an output to cell B. 

1) Cell A receives inputs which describe one of possibly 5000 experiences, not simply through the value of the inputs but also their location on cell A. 

2) a "yes" or "no" message is then sent to cell B.

I am not clear how subject cell B has information as to which of the 5000 possibilities the "yes" or "no" from cell A refers to, but then I am assuming that it cannot be encoded into the synapse location, as the "yes" and "no" to all 5000 possibilities arrive at the same location; and I am also assuming that the 5000 possibilities would not be encoded any more efficiently for cell B than cell A; and I am assuming that there is no evolutionary advantage to having the same inputs as cell A plus the "yes" or "no" input that would result from those inputs anyway. So how are you suggesting subject cell B has information as to which of the 5000 possibilities the "yes" or "no" refers to, such that its experience would involve the correct content?

Yours sincerely, 

Glenn


2016-12-13
RoboMary in free fall
Reply to Glenn Spigel
Dear Glenn,
I had hoped to have made it clear that empirical evidence for the location of a synapse in a dendritic tree being functionally important is very hard to pin down, and that my argument is largely based on theoretical necessity. If synaptic site did not matter then one might ask why the climbing fibres have to climb all the way up to where they attach, but beyond that it is merely a clue.

To answer the question about how cell B has information about which pattern cell A is saying yes to, I would suggest drawing out some diagrams that cover the scenario. Remember that the axon of cell A will be sending branches to about 10,000 cells so in fact there is no single cell B here. I am not sure that any cell needs to know what the pattern is either. My point is that if you start to apply Shannonian theory to brain pathways that are massively divergent and convergent at every step some very odd things start to happen. If the job of cell A is simply to send a 'yes' signal or not and the function of the yes signal is to cause a large number of cells to repeat firing if they did before, so that the same pattern of firing is repeated then we can reasonably say that the output of A is either 'yes, that pattern' or 'no, not that pattern'. There is in a sense only one Shannon bit but it is a bit that specifies one of two to the power 50,000 patterns.

As I see it this goes to show that deixis is a very underrated aspect of communication. But the pragmatics experts have told us that for years. One person can ask 'would you like the steak medium rare with the sauce potatoes and carrots with a green side salad with a traditional french dressing?' and the other person can say 'yes, that' effectively using 'that' to fix reference to the predicate of the other's sentence. Meaning involves co-variation with part of the outside world's dynamics in the context of a recognised rule about the state of various other aspects of the world's dynamics, specifically including potential variations in states of communicators. For bits sent over the internet in something like HTML the other aspects are kept fixed. For the codes using ENIGMA machines the contextual rules varied in a respecified way. For natural language we have all sorts of unwritten rules for changing the gears. In brains I suspect that changing the gears is most of the encoding. Similar things apply inside Turing machines but in a much simpler way because there is only one serial path.

2016-12-13
RoboMary in free fall
Hi Jo,
I am fine with the claim that the location of the synapse being functionally important is based upon theoretical necessity, but I did not understand your question about "why the climbing fibres have to climb all the way up to where they attach". Is the dendrite on the Purkinje cell that the climbing fibre connects to not special (to facilitate the stronger signal), such that it has to connect to that dendrite, which is at a certain place on the cell?

Are you suggesting that the idea that neurons firing depends on the concentration of positive ions at the axon hillock reaching a certain threshold is wrong?

Regarding the point about cell B, yes I can understand that cell A would be communicating to more than one cell. Any one of them could have been considered to be cell B. My point was that cell A does not communicate what content it was experiencing to any subject. 

You write:
If the job of cell A is simply to send a 'yes' signal or not and the function of the yes signal is to cause a large number of cells to repeat firing if they did before, so that the same pattern of firing is repeated then we can reasonably say that the output of A is either 'yes, that pattern' or 'no, not that pattern'. There is in a sense only one Shannon bit but it is a bit that specifies one of two to the power 50,000 patterns.
Do such neurons exist? I was not thinking that even the climbing fibres' signals fully determined the firing of their Purkinje cell targets. If you were just using the idea as an example to highlight a concept, then given that cell A would not have information as to what pattern the other cells were receiving, it is hard to see how its firing can be considered a comment about a specific dispersed pattern about which cell A does not have full information, as opposed to a comment on which of two categories (the "yes" or the "no" category) the specific dispersed pattern falls into. In the latter case it would seem that the Shannon bit that cell A communicates would be encoding one of two possibilities, as expected.
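
To put the arithmetic behind this concretely, here is a worked check on standard Shannon assumptions (the N is merely illustrative):

import math

N = 50_000
bits_in_signal  = math.log2(2)      # a binary 'yes'/'no': 1 bit
bits_in_pattern = N * math.log2(2)  # one of 2**N equiprobable patterns: N bits

print(bits_in_signal, bits_in_pattern)   # 1.0 50000.0
# the one bit cell A sends can only distinguish the two categories; the other
# 49,999 bits of specification would have to reside already in the receivers,
# which is why I say the bit encodes one of two possibilities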

Other than direct sensory signals, your theory seems to imply that all cellular communication is in terms of "yes" or "no".

It seems that you are claiming that the content of the "yes" or "no" signals would be determined from the synaptic locations and the firing patterns. 

If that is the case then presumably for the content to be appropriate the neuron would have to have the means to ensure the correct position of its synaptic locations. But

1) How could that be ensured without detailed information about the arrangement? 

and 

2) With the plasticity of the arrangement how could the appropriateness of the content be maintained?

and furthermore 

3) What evolutionary necessity would there be for the cell/subject's experience to be appropriate, given that none of the subjects report what content they are experiencing and instead just give out "yes" or "no" messages?

Yours sincerely, 

Glenn  

2016-12-14
RoboMary in free fall
Reply to Glenn Spigel
As I said, Glenn, to understand what I am saying you have to draw yourself some diagrams, and they are not simple. However, anyone who has pondered how short-term memory works is likely to have done this exercise and come up with much the same answer. It is not difficult in comparison to the sorts of circuits computer programmers propose. The person who has actually published a whole model where the necessary feedback circuits are explicit is Arnold Trehub. I disagree with Arnold about where in the system qualia arise but his circuitry looks more or less identical to the one I worked out myself. 
A key point for me, mentioned in my Frontiers in Psychology article but explained in more depth in the Poeppel and Embick reply essay, is that individual axon spikes have to mean propositions or commands. This does not arise in traditional external signals, where strings can be used to form propositions such that logical inferences can be performed over large numbers of piecemeal integrations. In brains I am pretty sure that each integration step has to be a complete logical inference step, so the inputs have to be propositional, as does the output. However, just as in natural language, if context is fixed externally the meaning of the proposition can be determined elsewhere, as in 'yes, repeat that pattern' where the pattern is stored in some short-term reinforcement of connections between other cells. There must be circuits that do that for us to have short-term memory.

There would clearly be a difference between the bit value of an incoming signal in terms of the number of different meanings it could have, because it could 'point' to a wide range of patterns held elsewhere in the system, and the bit value in terms of its contribution to the richness of the pattern of input to the cell actually receiving it. None of the B cells gets a rich pattern from A, but if all the right cells re-fire, A can get the same rich pattern it responded to before. This analysis works the same if, like Arnold Trehub, you do not place experience in individual cells but in a 'distributed firing pattern'. It is not different because of my premise of cells experiencing. 

The computational linguistics people seem to have got very stuck with this because they stick to traditional information theory without thinking how it would really be cashed out in a brain. So people like Fodor with his language of thought have spent years scratching their heads because they are wanting symbols in a brain like symbols in a natural or computer language with strings. Randy Gallistel has got stuck for the same reason.

The point about the climbing fibres is that in traditional summate and fire theory it does not matter where synapses are, as long as they contribute some depolarisation. The assumption is that there is a linear summation of all depolarising effects of openings of ion channels. You might think that it mattered how near the synapse was to the axon hillock but in the default theory by definition position is irrelevant. I have already mentioned people who have recently shown non-linearity in integration - Segev and Koch some years ago, more recently Tiesinga and Hausser's groups. There may well be a lot more literature since 2012 but I have not been keeping up with it.
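
A toy contrast may make the distinction explicit; the rules and numbers below are invented purely for illustration, not a model of real dendritic integration:

THRESHOLD = 3.0

def fires_linear(inputs):
    # default summate-and-fire: only total depolarisation matters,
    # so synapse position is irrelevant by definition
    return sum(amp for amp, _branch in inputs) >= THRESHOLD

def fires_nonlinear(inputs):
    # a crude position-sensitive rule: inputs on the same branch
    # cooperate supralinearly before reaching the soma
    totals = {}
    for amp, branch in inputs:
        totals[branch] = totals.get(branch, 0.0) + amp
    return sum(v ** 2 for v in totals.values()) >= THRESHOLD

clustered = [(1.0, "branch_a"), (1.0, "branch_a")]
scattered = [(1.0, "branch_a"), (1.0, "branch_b")]

print(fires_linear(clustered), fires_linear(scattered))        # False False
print(fires_nonlinear(clustered), fires_nonlinear(scattered))  # True False
# under the linear rule the two cases are indistinguishable; under the
# nonlinear rule the placement of the synapses decides whether the cell fires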

2016-12-14
RoboMary in free fall
Hi Jo, 

I do not think drawing a diagram would help, because the questions are about how the cells would ensure that their synapse locations are in the correct place given the system organisation. What were you expecting me to do when drawing a diagram, just assume that they were? I will ask you the questions again, since you did not seem to even attempt to address them. If you do not understand them, then it would be useful if you mentioned it. Or if you think I have misunderstood you and made an incorrect assumption about your theory, it would be useful if you pointed it out. I had written:

Other than direct sensory signals, your theory seems to imply that all cellular communication is in terms of "yes" or "no".

It seems that you are claiming that the content of the "yes" or "no" signals would be determined from the synaptic locations and the firing patterns. 

If that is the case then presumably for the content to be appropriate the neuron would have to have the means to ensure the correct position of its synaptic locations. But

1) How could that be ensured without detailed information about the arrangement? 

and 

2) With the plasticity of the arrangement how could the appropriateness of the content be maintained?

and furthermore 

3) What evolutionary necessity would there be for the cell/subject's experience to be appropriate, given that none of the subjects report what content they are experiencing and instead just give out "yes" or "no" messages?

Regarding a separate issue you wrote:

The point about the climbing fibres is that in traditional summate and fire theory it does not matter where synapses are, as long as they contribute some depolarisation. The assumption is that there is a linear summation of all depolarising effects of openings of ion channels. You might think that it mattered how near the synapse was to the axon hillock but in the default theory by definition position is irrelevant. I have already mentioned people who have recently shown non-linearity in integration - Segev and Koch some years ago, more recently Tiesinga and Hausser's groups. There may well be a lot more literature since 2012 but I have not been keeping up with it.
But that did not seem to even attempt to answer the question that I asked:

Is the dendrite on the Purkinje cell that the climbing fibre connects to not special (to facilitate the stronger signal), such that it has to connect to that dendrite, which is at a certain place on the cell? 
As previously mentioned, it is not the case that the climbing fibre gives a signal of similar strength to the others but causes peculiar behaviour because of the signal location. If that had been the case then, sure, it would have been evidence that location was important. But that is not the case; instead it gives a comparatively extremely strong signal, so its firing significantly weighting the result seems explainable by standard theory. You questioned why it connects where it does, but if the dendrite on the Purkinje cell that the climbing fibre connects to is special (to facilitate the stronger signal), then it would not be surprising if it was located at a certain place on the cell (analogous to human heads tending to be at certain places on human bodies). It would not even matter whether it was located at the same place on each cell, only that it was special. If special, it could simply release a chemical indicator of its location, and the climbing fibre could head towards the higher concentration of that signal.

Yours sincerely, 

Glenn  



2016-12-14
RoboMary in free fall
Reply to Glenn Spigel
I understand why you are asking these questions, Glenn, but the problem is that they do not even arise because they are predicated on assumptions that do not apply. 
The drawing of diagrams reveals how the internal deictic system has to work, quite independently of questions about where synapses attach on dendritic trees. What would a 'correct' position be? The position is what it is. And 'content' in the sense philosophers use will not do the job needed in this discussion unless it is carefully contextualised. That is presumably why Quine decided that there was no such thing as meaning to a subject. (Dan Dennett was a student of Quine at Harvard.) The meaning of a word, and especially the word 'meaning', shifts every time you shift a context, including which subject the meaning is to. And although meaning by and meaning to seem to follow the same rules, that is only because the brain is constructed in such a way that the two match up as well as possible.
As far as I am concerned it is completely impossible to understand the issue we are trying to discuss unless you have the sort of diagram Arnold has produced in mind. You end up going in circles of the sort Wittgenstein complained of. The way qualia correspond to physical dynamics is a biological problem, and in order to address a biological problem one has to have some idea of the complexity of the biology involved.


2016-12-15
RoboMary in free fall
Hi Jo, 
I am slightly surprised that you now seem not to know what I mean by content, or to wonder what I mean by correct synaptic locations. We have been discussing content at least since 25/11/2016. In post  https://philpapers.org/post/24146 you wrote:

As to what would determine if the content involved colours or sounds, that would be a very interesting question and presumably it would have to do with the interrelationships between the spatial location of the relevant post synaptic potentials.
When I enquire about how the neurons could "ensure the correct position of its synaptic locations", I mean: how do they ensure that "the interrelationships between the spatial location of the relevant post synaptic potentials" would give rise to appropriate content? Presumably if the dendrites had been in different locations the interrelationships would have been different. 

Also you started off your reply:

I understand why you are asking these questions, Glenn, but the problem is that they do not even arise because they are predicated on assumptions that do not apply. 


Though as I mentioned the previous post (emphasis added):

I will ask you the questions again, since you did not seem to even attempt to address them. If you do not understand them, then it would be useful if you mentioned it. Or if you think I have misunderstood you and made an incorrect assumption about your theory, it would be useful if you pointed it out. 
But you have not pointed out what assumption you are suggesting I made which is incorrect. Anyway, now that you can hopefully understand what I was asking you about (since it relates back to what you yourself have written), I will ask you the questions for a third time. They include the assumptions, so feel free to point out which are incorrect and explain why:

Other than direct sensory signals, your theory seems to imply that all cellular communication is in terms of "yes" or "no".

It seems that you are claiming that the content of the "yes" or "no" signals would be determined from the synaptic locations and the firing patterns. 

If that is the case then presumably for the content to be appropriate the neuron would have to have the means to ensure the correct position of its synaptic locations. But

1) How could that be ensured without detailed information about the arrangement? 

and 

2) With the plasticity of the arrangement how could the appropriateness of the content be maintained?

and furthermore 

3) What evolutionary necessity would there be for the cell/subject's experience to be appropriate, given that none of the subjects report what content they are experiencing and instead just give out "yes" or "no" messages?


And regarding Marr's analysis of the Purkinje cells, again you did not answer the question.

Is the dendrite on the Purkinje cell that the climbing fibre connects to different from the dendrites the parallel fibres connect to (to facilitate the stronger signal)?

Yours sincerely, 

Glenn

2016-12-16
RoboMary in free fall
Reply to Glenn Spigel
Dear Glenn,
All I can say is that I think the answers to all your questions are in my previous posts, but I realise you are very likely not to see that if you are not going to go through the process of drawing or studying the sort of diagrams that make clear what the problems are. There is no short cut. Words on their own are not much use. The contribution of a word like content to a sentence may be reasonably unambiguous (for instance if the question is whether it relates to colours or sounds) but in another sentence its contribution may be entirely opaque because one wants to distinguish various classes of content. You will of course be familiar with the concepts of broad and narrow content, but I was trying to put across in recent posts that the problem is immensely greater than that, such that there is no single sense in which a signal carries content - there may be a more or less infinite number of senses. As long as one has a grasp of what Arnold's diagrams imply that is tractable, but without it one goes round and round in circles.

As indicated before, it would help me to know why you are actually interested in any of these questions. I am interested because I want to identify plausible dynamic models. So for me discussion is only productive if the detail of the models is in frame. I have no idea what models you think might be plausible or whether you simply enjoy arguing!

2016-12-18
RoboMary in free fall
Hi Jo, 
I do not need to draw diagrams to help me see problems with your account; I am raising some anyway (perhaps the same, perhaps not). If you have already answered them, then please do so again, as it seems that three times in a row you have avoided answering the questions about how you ensure the content is appropriate in your theory. Because we are discussing your theory it is obvious that we are not talking about broad content, because you claim that the neuron subject would experience the same thing even if it were a neuron in a vat. So if you could just answer the questions regarding the content and the synaptic locations, using the same meaning for content that you used when you wrote (emphasis added):

As to what would determine if the content involved colours or sounds, that would be a very interesting question and presumably it would have to do with the interrelationships between the spatial location of the relevant post synaptic potentials.
Previously you had claimed that you did not answer them because I had made incorrect assumptions, and in my last two posts I asked you to point out what the incorrect assumptions were, if I did make them, but yet again you failed to do so. Now it seems that you are saying that I need to be aware of some diagrams to get the answers to the questions that you claim to have already answered without diagrams. I can see no reason to need diagrams to understand, though perhaps you could answer them and provide a link to the type of diagrams you think would be necessary. Or are you just going to continue to avoid answering them?

Also, for the second time in a row, you did not answer the question about whether the dendrite on the Purkinje cell that the climbing fibre connects to is different from the dendrites the parallel fibres connect to (to facilitate the stronger signal). Are you claiming that you have answered that one before also? I presume you are not suggesting that without a diagram I could not comprehend any written answer.

Not that I think it is relevant to your answers, but the reason I am asking the questions is to investigate the plausibility of your theory. I would have presumed you would be interested in any potential problems with it. There is also another problem which I brought up earlier but moved off of as I tried to understand your position more fully; my last mention of it was, I think, in post https://philpapers.org/post/22130 . For now, though, I would rather not go back to that until we have finished with these questions, as these are to do with my understanding your position.

I assume you are not going to even attempt to answer the questions I have been repeatedly asking you and you have repeatedly not answered, but hope to be pleasantly surprised. 

Yours sincerely, 

Glenn

2016-12-18
RoboMary in free fall
Reply to Glenn Spigel
Dear Glenn,
Most of what I have been saying is not about my theory, just general arguments about the place of meaning in neural interactions. But if you want to understand my theory I think you will need to be familiar with the relevant diagrams. I had to work on the diagrams for many months before I understood my theory. If you are learning computer programming you do not get a feel for how logical operations work until you have practised programming for quite a long time.

I think maybe the key sticking point is that you are asking what ensures 'appropriate' content. But what does that mean? Are the letters d and o and g appropriate for canine pets? Or is chien more appropriate? It seems a bit like asking if the wheels of a Mercedes Benz are appropriate to a Mercedes Benz. Presumably they must be. You seem to have some concept of content being 'the right content', but I think this is a category mistake. Our sense of space is not 'like space' or our sense of time 'like time' any more than our sense of red is like the cluster of dispositional properties in the world that give us the sense of red. Can you give me any idea what 'appropriate' might mean?

2016-12-19
RoboMary in free fall
Hi Jo, 
Well, I was not pleasantly surprised. Are you ever going to answer the question: is the dendrite on the Purkinje cell that the climbing fibre connects to different from the dendrites the parallel fibres connect to (to facilitate the stronger signal)?

Regarding content, if you want an example, then consider that in your theory the subject that is experiencing what you are experiencing is a neuron. You presumably believe that when you experience typing on a computer, the human is typing on a computer, and that when you experience someone talking, the human is detecting soundwaves caused by someone talking, etc. In other words, the content (the computer, any sounds you might be hearing, any other stuff you might be seeing or feeling, etc.) that you as a neuron are experiencing is appropriate to what is happening to the human. 

So here are the questions again, with some further assumptions. Like the question about the Purkinje cell, you have repeatedly avoided answering them, and your reasons for not answering keep changing. Now, with the concrete example, I think the vast majority of people reading this will understand what I am asking, and I see no reason why you should be different, so hopefully you will now answer, or state that you have no intention of answering, and save me wasting my time repeatedly asking.

Other than direct sensory signals, your theory seems to imply that all neuron communication is in terms of "yes" or "no".

It seems that you are claiming that the content of the "yes" or "no" signals would be determined from the synaptic locations and the firing patterns. 

If that is the case then presumably for the content to be appropriate the neuron would have to have the means to ensure the correct position of its synaptic locations. But

1) How could that be ensured without detailed information about the arrangement? 

and 

2) With the plasticity of the arrangement how could the appropriateness of the content be maintained?

and furthermore 

3) What evolutionary necessity would there be for the cell/subject's experience to be appropriate, given that none of the subjects report what content they are experiencing and instead just give out "yes" or "no" messages?

You can answer the questions in terms of the cell that is experiencing what you are experiencing (in your theory). Your not answering has gone on for a week now. Also, as I mentioned, if you think it would be useful, then include a link to whatever diagrams you feel would help with your answers.

Yours sincerely, 

Glenn

2016-12-19
RoboMary in free fall
Reply to Glenn Spigel
Dear Glenn,
There does not seem much point in my trying to re-answer questions when you seem to have a block with this issue of 'appropriate content'. If we are talking about 'extension' then the neurone's experience of being inside a person typing on a computer will be about that state by dint of the connectivity of the brain and the dispositional properties of the network. We are agreed that we are more interested in intensional content, but as far as I am aware we know of no particular requirement for what the experience of typing on a computer should be like, other than that it should mediate appropriate responses as in the extensional analysis. In what sense are you thinking it needs to be 'appropriate'?

As indicated before, what we know of brain development suggests that the brain learns through selection rather than instruction, as the immune system does. The implication of that is that the nerves are set up such that at least some of them are connected in such a way that they will recognise typing on a computer and mediate relevant responses. When we learn about typing and computers these cells get recruited for use. If they do the job well then whatever those or other cells experience will obviously do the job well.

I agree that in my own model there would be constraints on what positions for synapses would mediate efficient responses to sensations relating to typing on computers, dictated by whatever pattern based computational rules governed the integration process. However, as yet there is no way of knowing what those rules might be so no specific predictions can be made. The closest we have to anything like that are probably the studies of Koch and Segev on giant locust neurons that mediate a response to an approaching object. So I think the positioning of synapses very likely needs to be appropriate but I still do not understand what it would be for the experience to be appropriate. The outside world is not like our experiences in any meaningful sense so there is no similarity on which to base appropriateness.

2016-12-20
RoboMary in free fall
Hi Jo, 
You would not be re-answering questions, you have not answered them. I do not have a block with appropriate content either, you seem to be digging deep in order to find a difficulty with a simple concept. So let me review a few things for you. 

You have stated that the experiential content would be the same for the neuron subject that in your theory you are (the one experiencing what you are experiencing), whether the neuron was in a vat or in the brain, as long as the dendrite synapses were stimulated in the same fashion. 

You also consider the experiential content to be evidence about what is happening outside of the human, rather than evidence of the behaviour of neighbouring neurons in relation to you (the neuron that is experiencing what you are experiencing).

For the experience to be appropriate, such that it acts as evidence about what is happening outside of the human, it needs to reflect that. So if the human was typing on a computer, then you having an experience of sitting by a lake watching flamingos would not be appropriate; neither would flashes of light reflecting the intensity and spatial relationships of the dendrite signals, nor the equivalent in sound. What would be appropriate would be experiential content that maps to the human's description of what it is consciously experiencing. 

Are you still struggling with understanding this, or does that clear it up for you?

Yours sincerely, 

Glenn

2016-12-20
RoboMary in free fall
Reply to Glenn Spigel
Now I see you really have not understood, Glenn. I am sorry if I cannot help further but I suspect I cannot. A careful reading of Elizabeth Anscombe might help, but I doubt it. We have no reason to think experience is 'appropriate' in any sense other than being appropriate to the computational task involved in post-synaptic integration. There would be no problem with two cells being identical in structure and function in a brain and for one to use a particular conformation for flamingos that the other used for computers. There is simply no difficulty there. The cell in the vat comparison is across a different sort of difference, so is a non sequitur. I think I have probably said all I can. I have indeed answered all your questions many times, but I can fully see why that is not apparent. Until you have understood the diagrams I see little chance of you seeing what I mean.

2016-12-21
RoboMary in free fall
Hi Jo, 
You wrote:

We have no reason to think experience is 'appropriate' in any sense other than being appropriate to the computational task involved in post-synaptic integration. 


Were you not basing your theory that you (the subject that is having the conscious experience that you are having) are a neuron upon your conscious experiences of a world with human inhabitants, which have brains, which have neurons in them, and upon some reasoning about those conscious experiences? Because if you were, then have you not assumed that there were physical counterparts to the objects of your experience, and based your theory upon that assumption (you assume physical humans with brains which have neurons exist, for example)? 
You wrote:

There would be no problem with two cells being identical in structure and function in a brain and for one to use a particular conformation for flamingos that the other used for computers. There is simply no difficulty there.

Are you stating that in theory you could have two cells in a brain, with identical structure, receiving identical patterns of dendrite synaptic stimulation, and for them to differ in what they experienced? I assumed you were not suggesting that, but if you are, then you are right, I have misunderstood you. 

If not, I do not even understand why you wrote it; there was no suggestion in what I wrote that if the dendrite synaptic stimulation in two neurons with identical structure was different then the experience could not be different. I had assumed that the experience of a neuron would change if the synaptic stimulation changed. So if you thought such a comment was relevant to what I wrote then perhaps you should read what I wrote again, as maybe you misunderstood (assuming you had not just decided to write irrelevant comments). 

Also you keep mentioning the diagrams, and as I have pointed out numerous times now, if you think they will help, then supply a link to them. 

Yours sincerely, 

Glenn



2016-12-22
RoboMary in free fall
Reply to Glenn Spigel
Dear Glenn,
I am afraid that your category mistake is sitting there staring out from your last post. The sentence I wrote about a cell using the same dynamic relations for flamingos that another might use for computers does not require that those cells have different experiences, even if in principle one could make such a statement. I think you must be stuck in some form of naive realism.

You talk of a world of physical things but to a physicist 'physical' just means involving dynamic relations. Yes, I assume there are outside dynamic relations that are represented by inside dynamic relations, but there is no suggestion that the representation would be a similar relation to what it represents. Its job is to predict future dynamic relations from inferred dispositional patterns. Its dynamics have to do with the work of inferences, not the flight of flamingos or the glowing of a computer screen. Appropriateness is appropriateness to the job in hand, not to some illusory 'likeness' of the world.

I have already suggested you look at Arnold Trehub's diagrams. I assume that you realise that all you need to do is Google Arnold Trehub (being an unusual name). You will pull out 'The Cognitive Brain' which is well worth a read. (I was not aware that you had ever pointed out numerous times that you needed help finding the link!) 

Triggered by our discussion I got in touch with the leading world authority on the specificity of determination of synapse positions in dendritic trees. He very kindly pointed me to a review, which confirms that synapses do indeed connect to specific areas, although I am not sure that even now we have evidence that clearly separates the default summate and fire model from pattern based models.

The review is at: Annu. Rev. Cell Dev. Biol. 2009, 25:161-195.

2016-12-22
RoboMary in free fall
It might help to add that if a cell, or more precisely a dynamic unit with the domain of a cell, has the sort of experiences we talk about then it will be, in a sense, in a 'private language'. Since we have no reason to think that what an experience is like to A is ever transmitted to B there is no problem with cells using the same experiences for quite different purposes or different experiences for the same purpose. The only comparisons of what things are like will be comparisons within the phenomenal content available to one cell, which, being private, is never subject to comparisons with experiences in other cells that might yield incongruity. 

2016-12-23
RoboMary in free fall
Hi Jo, 
I can see The Cognitive Brain reference, but you did not mention which of the 16+ PDFs contain the diagrams you seem so keen on. Is it problematic to just supply a link?

In post  https://philpapers.org/post/25230

You wrote:

You talk of a world of physical things but to a physicist 'physical' just means involving dynamic relations. Yes, I assume there are outside dynamic relations that are represented by inside dynamic relations, but there is no suggestion that the representation would be a similar relation to what it represents. Its job is to predict future dynamic relations from inferred dispositional patterns. Its dynamics have to do with the work of inferences, not the flight of flamingos or the glowing of a computer screen. Appropriateness is appropriateness to the job in hand, not to some illusory 'likeness' of the world.


This seems to suggest that you do consider that the dynamic relations of forces which you think make up humans exist, based upon the conscious experience of humans that the neuron you experience being has. But see below, as you write something that brings this into question. 

Also, when you write "Its job is to predict future dynamic relations from inferred dispositional patterns", I do not know what you are referring to by "Its". Are you referring to the arrangement of neurons, or a single neuron? I also do not know what you mean by "inferred dispositional patterns", because if you were referring to a neuron in a vat, I am not sure what would be inferred by the pattern, or what dispositional would refer to other than whether the signals were strong enough (or in the right locations) to cause the neuron to fire. 

You also wrote:

Triggered by our discussion I got in touch with the leading world authority on the specificity of determination of synapse positions in dendritic trees. He very kindly pointed me to a review, which confirms that synapses do indeed connect to specific areas, although I am not sure that even now we have evidence that clearly separates the default summate and fire model from pattern based models.


I do not know what part of our discussion you thought this related to. Were you perhaps thinking it was an answer to the question you have repeatedly avoided answering: is the dendrite on the Purkinje cell that the climbing fibre connects to different from the dendrites the parallel fibres connect to (to facilitate the stronger signal)? If so, then a direct answer would be useful.

Also, when you state that you are not sure that even now we have evidence that clearly separates the default summate and fire model from pattern based models, do you mean anything other than that there is no evidence for the pattern based models that you require for your theory, and that all the evidence can be explained by the number and strength of signals, as though the dendrite location were pretty much irrelevant?

You wrote in the post I am replying to:

Since we have no reason to think that what an experience is like to A is ever transmitted to B there is no problem with cells using the same experiences for quite different purposes or different experiences for the same purpose. 


Are you suggesting that the same inside dynamic relations would represent different outside dynamic relations, or just that the inside dynamic relations which represent the same outside dynamic relations can be used for the same purpose? 

When you suggest that different experiences can be used for the same purposes, are you suggesting that there will be some pre-established harmony that ensures the same reaction to different experiences, or perhaps that there will be a correlation between how one neuron reacts to certain experiences and how another will react to a different experience?

Yours sincerely,

Glenn


2016-12-24
RoboMary in free fall
Reply to Glenn Spigel
I don't keep a link to the Cognitive Brain. I read the book. If it is all online then that seems fine. You have to read enough of it to see what the diagrams are about. There would be no point in just peering at the diagrams themselves. 
Have a good Christmas.

2017-01-03
RoboMary in free fall
Hi Jo, 

Thanks for the link to the book, but I am not planning on looking through the 16+ PDFs to find diagrams whose relevance I do not see, since I cannot see how the questions I have asked need a diagram to explain. You had stated that I had made some incorrect assumptions, but I had listed my assumptions and repeatedly asked you to point out which of them were incorrect, and repeatedly you did not point out any incorrect assumption. Also, one question I had been asking you numerous times was:

Is the dendrite on the Purkinje cell that the climbing fibre connects to different from the dendrites the parallel fibres connect to (to facilitate the stronger signal)?

and you did not answer, and it was a simple "yes" or "no" answer. Once or twice perhaps it was an oversight, but after so many times it seemed obvious to me that you were intentionally avoiding answering, and it does not require a diagram to give a yes or no answer. If the dendrite were different, that would explain why the climbing fibres connect where they do, and the extra-strength signal they deliver would explain the difference they make to the firing, as an introductory video such as this:

https://www.youtube.com/watch?v=5031rWXgdYo&t=1947s

points out that on the standard basic understanding a positive-ion threshold at the axon hillock needs to be reached. So it was not quite as you claimed in https://philpapers.org/post/24818 where you wrote:

The default model, known as integrate (really summate) and fire, holds that it does not matter which synapse is used for an input - the cell just adds up the number of inputs and fires when it is above a threshold. 

If it had been the number of firings, with the firing strength not taken into account, then maybe the Purkinje cell would have provided some evidence for your claim that the location was particularly important, as that could have been an explanation of what was significant about the climbing fibre firing, and supported your claim regarding Marr's analysis of the Purkinje cells:

... it makes the point that the default model almost certainly is wrong in some cases. 

But you have not shown that Marr's analysis even hints that the default model as shown in the video (where it is the positive ion concentration that is significant) is wrong in any case. If I had not checked up on what you were claiming and seen the video and the standard model it explains, I would have relied on your description of the standard model, and not have realised that Marr's analysis is not any evidence for what you were claiming, let alone the "best clue" that what you were stating was correct. 
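
To spell out the standard model I am referring to, here is a minimal leaky integrate-and-fire sketch; the constants are invented and it is a caricature, not a claim about real Purkinje cells:

V_REST, V_THRESHOLD, LEAK = 0.0, 15.0, 0.95

def spike_times(drives):
    # membrane potential decays toward rest and sums each step's input;
    # the cell fires whenever the potential at the axon hillock crosses
    # threshold, regardless of where the input synapses sit
    v, spikes = V_REST, []
    for t, drive in enumerate(drives):
        v = LEAK * v + drive
        if v >= V_THRESHOLD:
            spikes.append(t)
            v = V_REST
    return spikes

many_weak_inputs = [1.0] * 40           # parallel-fibre-like drive
one_strong_input = [0.0] * 5 + [20.0]   # climbing-fibre-like drive

print(spike_times(many_weak_inputs))   # fires only after many inputs accumulate
print(spike_times(one_strong_input))   # the single strong input fires it at once
# on this picture the climbing fibre's dominance is explained by its signal
# strength alone, with synapse location playing no role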

Regarding your theory, I will give a brief outline, and just point out the main problems that I can see with it, and then I think I will end the conversation, as this does not seem to be going anywhere, and I do not wish to spend any more time on it, though I do appreciate you putting forward your theory so I could check whether it was something I needed to take into account.

Firstly your theory about reality is that there are forces which are involved in dynamic relations. Any existing "physical thing" can be reduced to the dynamic relations of these forces. This is not to suggest that the whole universe should not be considered as a continuum of dynamic relations of forces. So when I talk of an existing human, I am not assuming that there is not a continuum of dynamic relations of forces between the human and other objects or that any object cannot be reduced to dynamic relations of forces. So just because I use those object type abstractions, it is not that I have misunderstood you.

Secondly your theory suggests that the subject that is experiencing what you are experiencing is a neuron.

1) The first problem with your theory is that you have stated in https://philpapers.org/post/22018:

I don't think anyone is suggesting that 'a feature of what it is like is influencing'. As I see it we are interested in what it is like to be influenced. (So the spin of the quark may affect what it is like for some other dynamic unit to be near the quark, not what it is like to be the quark - that would not make sense to me.) Being like something is not some new dynamic power or influence - I take it to be what we treat it as - what it is like to be proximal to a physical dynamic influence. 

The problem as I had pointed out in https://philpapers.org/post/21946:

I can tell that reality is not a zombie universe, and I base my knowledge of that on the evidence of my conscious experience. I stated that I could conclude from that that what I consciously experience is having an influence on me 

The human is reporting that it has knowledge that it is not a zombie universe, and the only way it could know that is if the conscious experience influenced what it knew, and if it did, then it would be an influence, contrary to your theory. In your theory, there are dynamic relations which it will be like something to be, but it being like something to be them is not itself an influence, but an epiphenomenal property.

Even if you were to suggest that what it was like was an influence to the behaviour of the dynamic relations, there would be the problem that I pointed out in post  https://philpapers.org/post/21974 :

Regarding the pansychic (sic) view, unless it is stated that some of those fundamental variables refer to features of what-it-is-like to be the underlying, the problem would be the same as that described above. If they are considered to refer to features of what-it-is-like to be the constituent, then the direct influences can be reduced to those what-it-is-like features, and what-it-is-like to be you is not one of those features. Since those directly influential features would be found in much simpler forms. This is not to assume that what-it-is-like to be you could not arise through some behaviour (though I have another argument regarding the implausibility of the neural states having the symbolism that they do if reality was a physicalist one. If you would be interested, I could maybe post it on a different thread). It is simply that what-it-is-like to be you would not be a feature that is a direct influence. The influences on behaviour could be reduced to the direct influences represented by the fundamental variables in the equation used to describe it, none of which refer to what-it-is-like to be you. You could break the equation down to understand the features that were considered to be a direct influence on behaviour (such as what-it-was-like to be a quark with a spin of 1/2).

2) The second problem is that you are claiming that what you consciously experience is evidence of what is external to the system the neuron is embedded in. I am not assuming you are a naive realist. My point is that you do not seem to have understood that there is no inherent correlation between the firings of a neuron, which could be imagined to be situated in numerous systems, and what is going on outside those systems. The correlation would vary depending on which system it was embedded in. That was one of the realisations behind the questions:

Other than direct sensory signals, your theory seems to imply that all neuron communication is in terms of "yes" or "no".

It seems that you are claiming that the content of the "yes" or "no" signals would be determined from the synaptic locations and the firing patterns. 

If that is the case then presumably for the content to be appropriate the neuron would have to have the means to ensure the correct position of its synaptic locations. But

1) How could that be ensured without detailed information about the arrangement? 

and 

2) With the plasticity of the arrangement how could the appropriateness of the content be maintained?

and furthermore
 
3) What evolutionary necessity would there be for the cell/subject's experience to be appropriate, given that none of the subjects report what content they are experiencing and instead just give out "yes" or "no" messages?

That I was asking in perhaps slightly different forms in posts  https://philpapers.org/post/24974,
 https://philpapers.org/post/24998, https://philpapers.org/post/25026, and  https://philpapers.org/post/25150 but never received an answer to. 

There are no diagrams that will show that claim (that the correlation between the neuron's firings and what is outside the system it is embedded in depends on the system it is embedded in, and that the neuron has no information regarding which system that is) not to be the case, so I am not planning on reading a book to find diagrams that I know will not address the issue, only then to find that you had not understood the issue I was pointing out. Though if you supplied the link, then I could read around it to make sure I understood the diagrams. 

Anyway, I hope you enjoyed Christmas, and have a happy New Year. While I would be interested in reading any response you might give, I think we could stop the conversation there, as perhaps it would be useful to consider over some time what the other has stated. Thanks for the time you put in. 

Yours sincerely, 

Glenn


2017-01-03
RoboMary in free fall
Reply to Glenn Spigel
Dear Glenn,
Of course 'the correlation between the neuron's firings and what is outside the system it is embedded in would be dependent on the system it is embedded in and the neuron has no information regarding what system it is embedded in'. The diagrams are not intended to deny what must be so. I fear you have missed the point again.

Happy New Year


2017-01-11
RoboMary in free fall
Hi Jo, 
I was going to leave the conversation to give us both time to consider the other's opinion, but you have left me nothing else to consider, and I doubt you will consider what I had written. But I will make a further attempt.

Since the correlation between the neuron's firings and what is outside the system it is embedded in would be dependent upon what system it is embedded in and what is going on outside that system, the neuron has no information about what is going on outside the system, since its firings are determined by those two factors. So if you consider the result of function x to represent what is going on outside the system, the result of function y to represent the state of the system the neuron is embedded in, and function z to be the firings of a particular neuron subject, all of these functions taking time as a parameter (reflecting that the results change with time), then that could be written as

x(t) + y(t) ⇒ z(t)

but knowing z(t) gives no knowledge of x(t), since z(t) !⇒ x(t) + y(t). I have left out the consideration that it would take time for a signal from outside the system to reach the neuron, though if you wish to adjust the equation to reflect that then fine. Likewise if you wish to adjust the equation to cover a period of time instead. It doesn't affect the point, which is that you suggest that the neuron subject's experience of its firings gives evidence of what is going on outside of the neuron, but the firings are not evidence of either what is going on outside the system or of anything about the system, so how in your theory could the experience be evidence of either (such as of the existence of a neural system, or of its embodiment in a human)? 
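
To illustrate, here is a small sketch; the 'worlds' and 'embeddings' are invented stand-ins:

def z(x, y):
    # the neuron's firings as a joint function of the outside state x and
    # the embedding system y (here y simply rewires which input reaches
    # which synapse)
    return tuple(x[i] for i in y)

x1, y1 = ("red", "edge", "tone"), (0, 1, 2)   # one world, one embedding
x2, y2 = ("tone", "red", "edge"), (1, 2, 0)   # different world, different embedding

print(z(x1, y1))   # ('red', 'edge', 'tone')
print(z(x2, y2))   # ('red', 'edge', 'tone') - identical firings
# given only z(t) the neuron cannot tell which (x, y) produced it, which is
# what I mean by z(t) !⇒ x(t) + y(t)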

Furthermore, consider a neuron subject which receives a certain group of firings, which is experienced as experience A; what difference would it make in your story if it had been experienced as experience B? For example, consider what you are experiencing now to be experience A, but suppose those same firings had instead been an experience involving no thoughts but just reflecting the dynamic relations occurring in the neuron, for example as some visual imagery. How do you explain knowing, and the human being able to report, which of those two possibilities you were experiencing? You had mentioned that what you were experiencing was evidence, but I was not clear whether you were considering it to be evidence that you could react to.

Yours sincerely, 

Glenn

P.S. There is no assumption of naive realism, just an assumption that you were thinking that corresponding to the objects you consciously experience were objects that could be reduced to dynamic relations of forces. And just to make the equation I supplied clear: it is just to indicate that the firings of the neuronal subject would not provide evidence of the system arrangement it was embedded in, or of how many "objects" the system was receiving inputs from. 

2017-01-11
RoboMary in free fall
Reply to Glenn Spigel
Dear Glenn,
The concept of knowledge you are assuming here is a pseudo concept that does not and cannot exist. I cover this in my essay on my website, Reality, Meaning and Knowledge (that is why knowledge is in the title). We assume we know what we mean by knowledge but there cannot be such a thing. What there is instead is counterintuitive but does the work needed.

All that matters is that the input to the neuron co-varies with some dynamic pattern in the world according to rules that are useful to the way the brain works. The sensed meaning of the pattern as 'space' or 'colour' or 'belonging' or 'mine' or whatever need not bear any relation to anything 'similar' outside. Descartes understood this very well. People have got muddled since.

2017-01-12
RoboMary in free fall
Hi Jo, 
Well I mentioned the assumption I was making:

There is no assumption of naive realism, just an assumption that you were thinking that corresponding to the objects you consciously experience were objects that could be reduced to dynamic relations of forces. And just to make the equation I supplied clear: it is just to indicate that the firings of the neuronal subject would not provide evidence of the system arrangement it was embedded in, or of how many "objects" the system was receiving inputs from. 

Are you suggesting your theory, that the subject experiencing what you are experiencing is a neuron in a human brain, makes no assumption that corresponding to the objects that you experience are objects that could be reduced to dynamic relations of forces? So, for example, were you not assuming that corresponding to your experience of using a computer to communicate with me there existed a computer (that is reducible to dynamic relations of forces), or that if you were to experience brain surgery there was a human with a brain that contained neurons? If you are claiming that you did not make any such assumption, then could you explain what you were basing the existence of neurons on?

Also I mentioned:

Furthermore, consider a neuron subject which receives a certain group of firings, which is experienced as experience A; what difference would it make in your story if it had been experienced as experience B? For example, consider what you are experiencing now to be experience A, but suppose those same firings had instead been an experience involving no thoughts but just reflecting the dynamic relations occurring in the neuron, for example as some visual imagery. How do you explain knowing, and the human being able to report, which of those two possibilities you were experiencing? You had mentioned that what you were experiencing was evidence, but I was not clear whether you were considering it to be evidence that you could react to.
You did not seem to answer, unless you were claiming that the context in which the word "knowing" was used there indicates it to be a pseudo concept. If so, I do not see how it can be, but let me rephrase: how can you tell which of those scenarios is the reality of the situation (that you are having experience A or that you are having experience B), or are you claiming that you cannot?

Yours sincerely, 

Glenn

2017-01-12
RoboMary in free fall
Reply to Glenn Spigel
As I said, Glenn, knowing in the sense most people use it is a pseudo concept. Please read what I have written on this if you are interested in my view.

2017-01-12
RoboMary in free fall
Hi Jo, 
I asked some questions about two specific issues, and I brought up the "knowing" issue, but reworded so that the concept of "knowing" was no longer used. In your response, though, you seem to have tried to resurrect the "knowing" issue as though it were integral to the questions, even though it is not mentioned in them and so was not relevant to them.
We can come back to the "knowing" issue if you feel it is relevant later, but in the meantime, are you going to answer the questions (which do not entail the concept of "knowing"), or would you prefer not to?

Yours sincerely, 

Glenn