From PhilPapers forum Philosophy of Mind:

2009-09-02
Is Functionalism impossible?

Dear Jonathan,

I think that the confusion I was referring to, regarding different definitions of functionalism, is pretty much universal :)

"You say that there is no conflict between multiple realizability and emphasis on internal action but do not explain how this can be if multiple realizability holds that it does not matter what the internal action is as long as it has the right effect."

Multiple realizability, as I use the term, just means that there are multiple types of physical architectures in which the same (computational, functional, and mental) states could arise.

I am not trying to shift the goalposts.  I admit that, having reviewed some more references, I find it clear that some forms of functionalism are not compatible with computationalism, especially those that invoke externalism.  But other approaches that are called functionalism are, I think, compatible with it.

"You now suggest that mental content depends on structure and transition relations. I see that these may determine the range of possible experiences but we are interested in what determines a particular experience in the context of particular internal events?"

The particular computational states that occur.

"Then you say that the structure is only relevant in an abstract sense - ie in a selected (abstracted) interpretation. But, as William Seager pointed out in JCS in 1995, the entity with the mental state will have no way of knowing which of a more or less infinite number of possible selected interpretations of its internal state is to be followed."

Please clarify what you mean.  Why do you think that's a problem?

Anyway, I think an example will help.  For this example I will consider relatively simple computations.  Presumably, conscious computations would have to be much more complicated, but a simple case should illustrate the idea.  So let us pretend that these computations can be analyzed using theories of mind.

The system takes a whole number N (in the range 1 to N_max) as input, computes for a while, and then outputs the Nth prime number.

Suppose that it can use one of two algorithms.  Perhaps #1 is more memory-efficient than #2, but let us suppose they take the same amount of time to run, so that an outside observer can't tell which one is being used.
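
To make this concrete, here is a minimal sketch in Python.  The particular algorithms (trial division for #1, a sieve of Eratosthenes for #2) are my own choices for illustration; the setup above only stipulates equal running time, which real implementations would not actually share.

    # Two ways to compute the Nth prime: identical input-output
    # behavior, different internal states and memory profiles.

    def nth_prime_trial(n):
        # Algorithm #1: trial division.  Keeps only a candidate
        # and a count in memory.
        count, candidate = 0, 1
        while count < n:
            candidate += 1
            if all(candidate % d for d in range(2, int(candidate ** 0.5) + 1)):
                count += 1
        return candidate

    def nth_prime_sieve(n):
        # Algorithm #2: sieve of Eratosthenes over a growing bound.
        # Holds an entire table of flags in memory.
        bound = 16
        while True:
            flags = [True] * bound
            flags[0] = flags[1] = False
            for i in range(2, int(bound ** 0.5) + 1):
                if flags[i]:
                    for j in range(i * i, bound, i):
                        flags[j] = False
            primes = [i for i, is_p in enumerate(flags) if is_p]
            if len(primes) >= n:
                return primes[n - 1]
            bound *= 2

    # Same input-output relation for every N in range:
    assert all(nth_prime_trial(k) == nth_prime_sieve(k) for k in range(1, 50))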

First, it's clear that these abstract specifications are multiply realizable.  What is relevant is the structure (the set of variables specified by each algorithm must be realized by physical memory elements or the like), the transition relations (such that the algorithms are reliably implemented), and the particular computational state (determined by the input and the number of time steps that have passed).  Computationalism is explicit about that.  Functionalism may need to be supplemented by those requirements.

But since the input-output relations are the same for both algorithms, is functionalism per se committed to saying that any mental states they give rise to must be the same?  I would say it is not, because the internal states are different.  We can always regard the next internal state as the output of the current one.
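
Here is one way to picture that finer grain, again just a sketch with an invented state encoding: treat the algorithm as a transition function, so that each intermediate state is literally the output of the previous one.

    # One tick of trial division, acting on a state (n, found, cand).
    def step_trial(state):
        n, found, cand = state
        cand += 1
        if all(cand % d for d in range(2, int(cand ** 0.5) + 1)):
            found += 1
        return (n, found, cand)

    # Record every intermediate state, not just the final answer.
    def trace(step, state, done):
        states = [state]
        while not done(state):
            state = step(state)
            states.append(state)
        return states

    t = trace(step_trial, (5, 0, 1), lambda s: s[1] >= s[0])
    print(t[-1][2])   # 11, the 5th prime

Two systems with the same final answer will in general have different traces, so at this grain their functional roles differ.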

"And none of this shifting around of jargon addresses my detailed arguments about the problem of the final output step and my original point that nothing can be informed of, and thereby experience, its functional role because it cannot be informed of its effects. Experiencing things without being informed of them is usually called clairvoyance or telepathy and is inconsistent with causality."

The functional role, here, is specified by which algorithm is being run and by what the current state is.  There is no way for the system to know, in the intermediate stages, what the output will be when the run is finished, but there is also no need to know that: the mere fact that the algorithm is the one it is suffices.

"The obvious alternative is that experience is determined only by the content and mode of input to an experiencing entity - which is more in line with common sense anyway."

That alternative, considering only input but not output, is no better as far as I can tell.  It runs right into Davidson's Swampman example: Swampman had none of the usual input, but his consciousness is surely the same as that of a regular person.

Computationalism has no problem with Swampman: he simply begins implementing the algorithm at an intermediate stage.
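
In the toy terms above (reusing step_trial and trace from the earlier sketch, so this is again only an illustration): nothing in the transition function cares how the current state came about.

    # Swampman, in miniature: instantiate the machine mid-run.
    midway = (5, 3, 5)   # as if the run had already found 2, 3, and 5
    rest = trace(step_trial, midway, lambda s: s[1] >= s[0])
    print(rest[-1][2])   # 11, exactly as in a run from the start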

Some functionalists would hold that Swampman's consciousness is different, based on his external functioning.  I would say that your argument might have traction against those functionalists, but I never saw any reason to take their externalism seriously anyway.

I would like to see some people who support functionalism per se respond to this and clarify just what functionalism means to them.

Regards,

Jack