From PhilPapers forum Philosophy of Mind:

2016-10-03
RoboMary in free fall

In footnote 3 of his paper "What RoboMary Knows" (https://ase.tufts.edu/cogstud/dennett/papers/RoboMaryfinal.htm), Daniel Dennett notes:

---

Robinson (1993) also claims that I beg the question by not honouring a distinction he declares to exist between knowing "what one would say and how one would react" and knowing "what it is like."  If there is such a distinction, it has not yet been articulated and defended, by Robinson or anybody else, so far as I know.  If Mary knows everything about what she would say and how she would react, it is far from clear that she wouldn't know what it would be like. 

---

In the paper Dennett imagines RoboMary as follows:

"1. RoboMary is a standard Mark 19 robot, except that she was brought on line without colour vision; her video cameras are black and white, but everything else in her hardware is equipped for colour vision, which is standard in the Mark 19."

Dennett then, it seems to me, takes it that RoboMary would consciously experience red when in a situation similar to the one in which we experience red, and so on. At the very least, it is clear from his response to Robinson that he is claiming it has not been shown that one could know what it would say and how it would react without thereby knowing what it was like for it. Dennett considers the following objection to his thought experiment:

"Robots don't have colour experiences!  Robots don't have qualia. This scenario isn't remotely on the same topic as the story of Mary the colour scientist."

And gives the following response:

"I suspect that many will want to endorse this objection, but they really must restrain themselves, on pain of begging the question most blatantly. Contemporary materialism-at least in my version of it-cheerfully endorses the assertion that we are robots of a sort-made of robots made of robots. Thinking in terms of robots is a useful exercise, since it removes the excuse that we don't yet know enough about brains to say just what is going on that might be relevant, permitting a sort of woolly romanticism about the mysterious powers of brains to cloud our judgement. If materialism is true, it should be possible ("in principle!") to build a material thing-call it a robot brain-that does what a brain does, and hence instantiates the same theory of experience that we do. Those who rule out my scenario as irrelevant from the outset are not arguing for the falsity of materialism; they are assuming it, and just illustrating that assumption in their version of the Mary story.  That might be interesting as social anthropology, but is unlikely to shed any light on the science of consciousness."

Here one might straight away claim that there is a distinction between knowing how a robot will behave and knowing which theory of consciousness is correct regarding robots. Two people could know exactly how the robot would behave yet disagree about the correct theory of its consciousness. One could consider the job done there and stop. But one can go further.

Let us imagine that, for each camera pixel, the Mark 19's eye sockets accept three 8-bit channels, A, B and C, which carry the light-intensity encodings. For the grey-scale camera the A, B and C channel values will all be the same, but for the colour cameras the values depend on the camera version. With RGB cameras channel A transmits the encoded red intensity, channel B the encoded green intensity, and channel C the encoded blue intensity; with BRG cameras channel A transmits the blue intensity, channel B the red intensity, and channel C the green intensity.

Now consider three Mark 19 robots, each in a different brightly lit room, sitting in a chair with all of its motors disabled, so that it is unable to move any body part, including its cameras.

The first is in a white room with a red cube which its RGB cameras are looking at. These cameras are slightly unusual as they also wirelessly broadcast their signal.

The second is in a white room with a blue cube which its BRG cameras are looking at. These cameras likewise wirelessly broadcast their signal.

The third is in a room with no cube; what is plugged into its camera sockets is a receiver that switches between picking up the signals broadcast from the cameras in the first two rooms.

The processing would be the same in each case, since in each case the channel values for the cube pixels (assuming no shading) would be channel A = 255, channel B = 0, channel C = 0. There seems to me to be no way for Dennett (or any other physicalist philosopher, for that matter) to establish whether the third room's Mark 19's experience of a cube was closer to how the philosopher consciously experiences a red cube or closer to how they would consciously experience a blue cube. If any philosopher disagrees, then I for one would be interested in how they think they could tell. If not, then here is another example of a distinction between knowing how something will behave and knowing what it would be like (if it is thought to be like anything at all) for a robot.
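The identical channel values can be sketched in a few lines of code (a minimal illustration of the scenario above; the function names and tuple representation are my own assumptions, not anything from Dennett's paper):

```python
# Hypothetical sketch of the two camera encodings described above.
# Channel values are 8-bit (0-255). A pure red surface reflects
# intensities (red=255, green=0, blue=0); a pure blue one (0, 0, 255).

def encode_rgb(red, green, blue):
    """RGB camera: channel A carries red, B green, C blue."""
    return (red, green, blue)

def encode_brg(red, green, blue):
    """BRG camera: channel A carries blue, B red, C green."""
    return (blue, red, green)

# Room 1: RGB camera looking at a red cube (no shading).
room1_channels = encode_rgb(255, 0, 0)

# Room 2: BRG camera looking at a blue cube (no shading).
room2_channels = encode_brg(0, 0, 255)

print(room1_channels)  # (255, 0, 0)
print(room2_channels)  # (255, 0, 0)

# The third robot's receiver passes on one of these broadcasts, and
# the channel values are identical either way, so everything
# downstream of the camera sockets processes the same input.
assert room1_channels == room2_channels
```

Since the (A, B, C) triples coming out of the two rooms are bit-for-bit identical, no fact about the third robot's processing distinguishes which room's signal it is receiving.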

"Knock-down refutations are rare in philosophy, and unambiguous self-refutations are even rarer, for obvious reasons, but sometimes we get lucky. Sometimes philosophers clutch an insupportable hypothesis to their bosoms and run headlong over the cliff edge. Then, like cartoon characters, they hang there in mid-air, until they notice what they have done and gravity takes over."

-Daniel Dennett