
2009-08-08
Is Functionalism impossible?
I have a question about functionalism that may have been covered fully by e.g. Block or others but not as far as I have discovered. Is it impossible by definition? I would value opinions.
Functionalism is, at least in some places, defined as the view that 'mental states are constituted solely by their functional role'. Functional role appears to imply an input-output relation in the context of the rest of the world. This role appears to be something definable in computational, logical or other abstract relational terms and thus 'multiply realisable' in physical causal chains. 


If I take the example of the mental state of being in pain Mp, it is not entirely clear to me what the functional role is in input-output terms, but let us say that it is Fp, which might include the input of stepping on a tack and the output 'Ow, that hurts'. Functionalism suggests that Mp and Fp are associated with a physical causal chain Pp(b(x)) which is an instance of a set of possible chains Pp(a,b,c,...(x,y,z...)) where a,b,c are classes of realising systems and x,y,z instances thereof. Given what we know of brains it seems reasonable to assume that for a brain (b(x)) pain is the mental state only in the context of the specific physical causal chain Pp(b(x)) and not Pq,r,s...(b(x)). 


If Fp is an input-output relation in the context of the rest of the world it includes the last event in Pp(b(x)), say Pp(b(x))O, which, crucially, is an output interaction with the world which may depend on the state of the world at that time (the receptivity of the world to the output aspect of Fp). The nature of this event is not accessible to the brain b(x) (except through feedback from yet later events which may reflect it, but in an unreliable way suffering from verification constraints). Thus functionalism states that Mp is determined in part by Pp(b(x))O. That is to say that if Pp(b(x))O were to be instantaneously altered by some unexpected change in circumstances Mp would be different. Yet, in physical causal terms Pp(b(x))O must follow the chain Pp(b(x)), which can only be the sort of Pp(b(x)) that goes with Mp. Thus it appears that Pp(b(x))O must determine Pp(b(x)) (because Fp determines Mp), but Pp(b(x)) is what leads to Pp(b(x))O. This appears to be nonsense.


I can imagine arguments in defense relating to precisely what is meant by determine, but my non-verbal mind tells me that the paradox is genuine. Functionalism is incompatible with causality and therefore impossible. 


In simple terms: since nothing can be reliably informed of its output, nothing can be informed of, and thereby experience, its function.

2009-08-08
Is Functionalism impossible?
Hi Jonathan, if I follow your formalism correctly, Pp(b(x)) is a complex event which realizes Fp in the relevant situation, and Pp(b(x))O is a part of the former (it's "the last event in Pp(b(x))"). But Pp(b(x)) cannot be "what leads to Pp(b(x))O" as you claim at the end if the latter is part of the former. When you fill a glass of water to the top, the whole filling-to-the-top event doesn't determine, cause or lead to the adding-the-last-drop event. I suspect the problem here is that you're thinking of the last event both as "the last event in [the chain]" and what "follow[s] the chain".

2009-08-10
Is Functionalism impossible?
Two thoughts: (1) Are you thinking of these definitions as defining particulars? If they are general, then a definition can pick out an input-intermediate-output pattern without requiring "looking into the future".

(2) I think there is a conflict between certain versions of functionalism, and mental causation; see my
Mental Causation (Columbia U Press, 2008), Chapter 7, and Robert Rupert, "Functionalism, Mental Causation, and the Problem of Metaphysically Necessary Effects" (Nous 2006).

2009-08-11
Is Functionalism impossible?
Reply to David Bourget
Dear David,
As I defined it, Pp is a causal chain of events leading from input to output which realises the role Fp. I am aware that I cut a corner in saying that Pp(b(x)) leads to Pp(b(x))O, but the alternative seems to make more of a meal of this than necessary. (The filling of a glass of water does not appear to be relevant here since it is merely a sequence of events occurring close in time and space, not a causal chain in the sense of an A that causes B that causes C that causes D etc. - as you point out.) So yes, by definition the last event in the chain is what follows the rest of the chain. So to be more precise, Pp(b(x))O appears to determine [Pp(b(x)) less Pp(b(x))O] because it determines (in part) Fp(b(x)), and what neurobiology leads us to believe is that for b(x) Fp(b(x)) will always be associated with a certain type of internal causal chain of events [Pp(b(x)) less Pp(b(x))O], but [Pp(b(x)) less Pp(b(x))O] determines Pp(b(x))O through physical causality.  

This series of arguments may cause unease about where boundaries are placed, but I think this is part of the problem with the whole functionalist concept. I have a feeling that part of the problem is that functionalism attributes mental states to the functional roles of systems, but systems cannot have functional boundaries if they are to interact with, and hence have a role in, the world. To have a role in the world a system cannot be treated in functional isolation from the world. Pp(b(x))O is neither system nor world - it is both. In fact the idea of mental content being associated with a role realised by a causal chain seems to me to be troublesome because there is nothing to stop an infinite regress in time. I can only conceive of a mental state being associated with a single event. Another point is that although Pp(b(x))O will be an event which has consequences for b(x) as well as the world, we know from basic neurobiology that motor acts do not involve feedback of information about the success of output. That has to be obtained by re-input through servo sensory systems. 

Maybe the simple formulation - that nothing can be reliably informed of its output, because this is retrocausation - still works best. I realise that if this was a simple matter to argue it would have been done long ago, but I am far from being the only person to feel that this is what it boils down to!

Best wishes.
Jo E

2009-08-11
Is Functionalism impossible?
Let me do some renaming to make this easier to read:

F = [Pp(b(x)) less Pp(b(x))O]
L = Pp(b(x))O
T = Pp(b(x))
(for "first, "last", and "total")

Your argument for the claim that L determines F appears to go:
  1. L partially determines T
  2. for b(x), T will always be associated with F.
Therefore L determines F.

(note that the "for b(x)" qualification is superfluous since F is by definition a component of T)

It's very unclear to me a) that "partially determines" and "always associated with" express the same relation; b) that this relation is determination; and c) that even if the relation deserves the name of "determination", there is any problem with L determining F while F causes L, in the relevant sense of "determining". Put more concretely, it seems to me that you're at most entitled to the following premises:
  1. L is an essential part of T
  2. T has F as an essential part
Clearly nothing incompatible with F causing L follows from this.

Here's another analogy. Take a cascade of dominoes. T is the event which encompasses the falling of all the dominoes. L is the last fall, and F is T minus L. I think that case is perfectly analogous. T is constituted by both F and L, so both "determine" T in some sense. T also determines both F and L because it requires them as parts. And F causes L. I don't see any relevant difference. But of course that example doesn't prove that dominoes can't fall!


2009-08-17
Is Functionalism impossible?
Jonathan, you are confusing functionalism with behaviorism.

I think what you are really saying is as follows: Suppose that Bob hears input Q at 2:00, thinks about it for a minute, from 2:00 to 2:01, and then gives output A.  You seem to think that functionalism holds that Bob's consciousness during that time depends only on Q and A.  So if someone shoots Bob dead one second before 2:01, so he never gives A, that would mean his consciousness from 2:00 until that time would be retrocausally removed or changed.

Of course that is nonsense.  Functionalism has nothing to do with input and output.  It holds that the internal functioning of Bob's brain, the states it passes through and the transition rules, is what causes his consciousness during those times.  Those things are obviously unaffected by future events.

2009-08-17
Is Functionalism impossible?
Reply to David Bourget

Dear David,

Unfortunately, your shorter notation seems to produce something bearing no relation to my argument, maybe because it has lost the explicit grounding in particular causal chains. You say ' It's very unclear to me that "partially determines" and "always associated with" express the same relation', which figures because they were not meant to. I fear that my thoughts remain uncommunicated, at least on list!

I always worry that reducing ideas to letters is dangerous and maybe I should not have tried. My plain English version probably is a better place to start. The problem is causality. Functionalism, ironically, is the one view that binds experience explicitly to causal chains - functional roles that affect the world - despite not minding what the chains are. Other views can associate experience with epiphenomenal brain events (not meaning that experience is epiphenomenal, which is another issue) at least some of the time. Being tied to causality, functionalism has to stand up to analysis of its causal coherence, and it does not seem to. It generates both retrocausation and an infinite temporal regress as far as I can see.

David Longinotti kindly put me right on my first query. My argument is probably logically the same as the swampman, which is well known in the literature. I wonder why it has not been universally accepted? Maybe the swampman is too hypothetical. I am not that fond of solving things with impossibly unlikely scenarios myself. I prefer an exposition that relies solely on showing that the formulation is self-contradictory.


2009-08-19
Is Functionalism impossible?

Dear Jacques,

No, I cannot be confusing functionalism with behaviourism. (I am a bit surprised how unfamiliar a professional philosophy forum is with functionalism and its problems!) My definition of functionalism was a readily available one written by someone sympathetic to functionalism. Behaviourism says that if you know enough neurophysiology you can give a complete description of the workings of Jim's mind without needing to know anything about what Jim feels: 'mental content' is irrelevant or even 'non-existent'. Behaviourism is probably valid, but impotent, because without the heuristic clues we get from assuming that Jim feels something like the way we do, the job is too hard. The problem is that if we misinterpret these heuristic clues, as I think functionalism does, we end up with a worse stalemate.

Your account of Bob at 2.00 to 2.01 is fairly close to functionalism, although functionalism states that Bob's mental content is determined by the role in the world of the entire causal sequence from Q to A. It also allows for several mental states between 2.00 and 2.01, the function in the world of some of which might just be to send certain ideas to memory. However, the absurdity that you instantly recognized comes out the same if you spell out exactly what functionalism has to entail. As long as you do not spell it out it sounds reasonable, but if you do, down to the last physical interaction (literally, the output), it crashes. You seem to agree that functionalism is impossible!

The alternative position you suggest that we call functionalism must be the right one, but it cannot be functionalism. 'Function' can have three meanings: internal action, external effect or purpose. Forget purpose and Dennett's intentional stance. If we disallow output we disallow effect, so we are left with internal action. However, functionalism, as I understand it, was deliberately designed to contrast with a 'reductionism' that relates experience to specific physical action. It holds that function is multiply realizable: the internal states can be anything. All that matters is that they have the right role in the world. (I think this dates back to Putnam's ideas of meaning being external; others may correct me.) As indicated above, even if we think of 'role in a microcosm such as a brain' we have something defined by effects external to the bit that has the role and is to be assigned a mental state. Spell out the details and you get an infinite regress. 

Information associated with a physical event is local to the event. It does not 'carry over' in a chain in such a way that you could add all the little actions together to be the content of consciousness. (Whatever you got would be causally/computationally meaningless.) 'Functions' of chains of events have to be considered not as actions but in terms of causal significance to distant events - which is effect and implies output. You cannot eat your cake and have it on this one as far as I can see. The various meanings of 'function' are constantly conflated in all branches of science and I suspect functionalism is the prime example.


2009-08-22
Is Functionalism impossible?
Dear Jonathan:

There are many definitions of terms like functionalism - and different subtypes of functionalism - used by different people, so that is no doubt part of the confusion.

There is no conflict between multiple realizability and emphasis on internal action.  However, perhaps what I described would be better called computationalism.  The states must have the right structure and the right transition relations, but that structure is specified in abstract terms such as a set of memory elements, not identified with specific physical features.  I know that some people consider it a form of functionalism, while others regard it as a separate issue.

I think the point you are missing is that function can be specified relative to immediately-following internal events, not just distant events.  If there is an analog system you can call that an infinite regress, but it is no paradox, just a differential equation.

Also, things like the bullet can be considered as input.  Whether it happens or not, the internal states until that time would be the same and have the same functioning up until that time.  Perhaps there are definitions of functionalism that suggest otherwise, but I don't think anyone who supports it actually has that in mind as something that is supposed to be a feature of it.

2009-08-29
Is Functionalism impossible?
Isn't the point of functional explanations that they do not resemble descriptions of physical causal chains?

The notion of "mental states" does not pick out any specific physical entity, process, or causal chain.  A functional role is not a series of events leading from an input to an output.  Rather, it is how an input/output relation behaves in a process or system.  Functionalism asks us to stop focusing exclusively on the input/output relation itself, and to look instead at how that relation functions in a context. The question is not what leads from input to output, but what depends upon and uses that relation.  The mental state is not what happens between the prick of the pin and the visible behavior.  It is not what causes the behavior.  It is rather what we say about the behavior in so far as we regard it as a reaction to the prick of the pin.

Mr. Edwards (please pardon the formality, but I am still not sure how best to address professors in this forum) notes that the output is crucial for determining the mental states.  Yet, he says, "in physical causal terms Pp(b(x))O must follow the chain Pp(b(x)), which can only be the sort of Pp(b(x)) that goes with Mp."

I think this is a mistake, perhaps resulting from a temptation to use the phrase "mental state" to refer to a specific causal chain which produces the resultant behavior.  This is an old intuition about mental states; specifically, the intuition that mental states must be known or experienced by the people who have them regardless of their output.  Mr. Edwards also indicates this when he says that, "since nothing can be reliably informed of its output, nothing can be informed of, and thereby experience, its function."

Functional roles do not specify physical causal chains at all.  While a functionalist could say the output determines the mental state, they would not use the phrase "mental state" to refer to the preceding causal chain.  This means that mental states are not known to us prior to or any more reliably than their outputs.  The only problem is in letting go of that intuition.

2009-08-29
Is Functionalism impossible?

Dear Jacques,

There may be many definitions of functionalism but I have not yet been presented with  evidence to indicate that they vary in regard to the central tenet I was addressing. I am not sure whose confusion you are referring to, but more of that anon. 

You say that there is no conflict between multiple realizability and emphasis on internal action but do not explain how this can be if multiple realizability holds that it does not matter what the internal action is as long as it has the right effect. You then, in the first of a series of shifts of the verbal goal posts, rename your functionalism computationalism (conceding that functionalism is indeed impossible?). I am not sure how this helps since a computation is an input-output relation. You now suggest that mental content depends on structure and transition relations. I see that these may determine the range of possible experiences but we are interested in what determines a particular experience in the context of particular internal events. Perhaps by 'structure' you mean dynamic structure=events? Or maybe transition relations are the events, but last time you said transition rules, which suggests you mean parametric rules of cause and effect, which brings us back to input-output relations. Then you say that the structure is only relevant in an abstract sense - ie in a selected (abstracted) interpretation. But, as William Seager pointed out in JCS in 1995, the entity with the mental state will have no way of knowing which of a more or less infinite number of possible selected interpretations of its internal state is to be followed. And none of this shifting around of jargon addresses my detailed arguments about the problem of the final output step and my original point that nothing can be informed of, and thereby experience, its functional role because it cannot be informed of its effects. Experiencing things without being informed of them is usually called clairvoyance or telepathy and is inconsistent with causality.

If you look at the posts you will see that I am not missing the fact that function can be specified relative to immediately following events. I pointed out that 'role in the world' can refer to an arbitrary microcosm thereof and that this does not change the argument. The entity, pared back until it is one of only two actions (to avoid anything other than an immediate relation), must have an experience commensurate with its 'microrole', which still involves its effect on (output to) an immediately following event. You have the additional problem of adding up microroles, which has no legitimacy I can think of. The regress implicit in ascribing experience to a causal chain is independent of the discrete or continuous nature of the variables involved. Why would an experience correspond to a chain of six microroles rather than all those back to the big bang? Functional roles do not come with ready-made brackets like in a 'C' programme.

Perhaps the crux is, as you say, that 'I don't think anyone who supports [functionalism] actually has [experience depending on effect or output] in mind as something that is supposed to be a feature of it'. Exactly: it may be there writ large in the definition but it is unclear that supporters of functionalism have realised what is implied - they do not have it in mind because they have not thought it through. In my experience that is the situation more often than not for fashionable ideas.

The obvious alternative is that experience is determined only by the content and mode of input to an experiencing entity - which is more in line with common sense anyway.


2009-09-02
Is Functionalism impossible?

Dear Jonathan,

I think that the confusion I was referring to, regarding different definitions of functionalism, is pretty much universal :)

"You say that there is no conflict between multiple realizability and emphasis on internal action but do not explain how this can be if multiple realizability holds that it does not matter what the internal action is as long as it has the right effect."

Multiple realizability, as I use the term, just means that there are multiple types of physical architectures in which the same (computational, functional, and mental) states could arise.

I am not trying to shift goal posts.  I admit that, having reviewed some more references, it's clear that some forms of functionalism are not compatible with computationalism, especially those that use externalism.  But, I think, other approaches that are called functionalism are compatible with it.

"You now suggest that mental content depends on structure and transition relations. I see that these may determine the range of possible experiences but we are interested in what determines a particular experience in the context of particular internal events?"

The particular computational states that occur.

"Then you say that the structure is only relevant in an abstract sense - ie in a selected (abstracted) interpretation. But, as William Seager pointed out in JCS in 1995, the entity with the mental state will have no way of knowing which of a more or less infinite number of possible selected interpretations of its internal state is to be followed."

Please clarify what you mean.  Why do you think that's a problem?

Anyway, I think an example will help.  For this example I will consider relatively simple computations.  Presumably, conscious computations would have to be much more complicated, but it should show the idea.  So let us pretend that these computations can be analyzed using theories of mind.

The system takes a whole number N (within the range 1 to N_max) for input, it computes for a while, then it outputs the Nth prime number.

Suppose that it can use one of two algorithms. Perhaps #1 is more memory efficient than #2 but let us suppose they take the same amount of time to run, so that an outside observer can't tell which one is being used.

First, it's clear that these abstract specifications are multiply realizable.  What is relevant is the structure (the set of variables specified by each algorithm must be reflected by using physical memory elements or such), the transition relations (such that the algorithms are reliably implemented), and the particular computational state (determined by the input and the number of time steps that have passed).  Computationalism is explicit about that.  Functionalism may need to be  supplemented by those requirements.

But since the input-output relations are the same for both algorithms, is functionalism per se committed to saying that any mental states they give rise to must be the same?  I would say it is not, because the internal states are different.  We can always regard the next internal state as the output to the current one.
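
Perhaps a concrete sketch makes the point clearer. This is a toy illustration only, written in Python with invented function names, and it should not be read as anything more than a rough picture of the idea: two routines that realise exactly the same input-output relation (given N, return the Nth prime) while passing through quite different internal states along the way.

# A minimal sketch, purely illustrative: identical input-output relation,
# different internal trajectories.

def nth_prime_trial_division(n):
    """Algorithm 1: test each candidate against every smaller number.
    Internal state is just a counter and the current candidate."""
    count, candidate = 0, 1
    while count < n:
        candidate += 1
        if all(candidate % d != 0 for d in range(2, candidate)):
            count += 1
    return candidate

def nth_prime_with_prime_list(n):
    """Algorithm 2: test candidates only against the primes found so far.
    Internal state includes a growing list of all previous primes."""
    primes = []
    candidate = 1
    while len(primes) < n:
        candidate += 1
        if all(candidate % p != 0 for p in primes):
            primes.append(candidate)
    return primes[-1]

# Same outputs for the same inputs, so an observer watching only input and
# output cannot tell which algorithm is being run.
for n in (1, 5, 10):
    assert nth_prime_trial_division(n) == nth_prime_with_prime_list(n)
print(nth_prime_trial_division(10))   # 29

If mental states are fixed by the internal states and transitions rather than by the external input-output relation alone, then the two routines could in principle differ "mentally" even though nothing observable from outside distinguishes them.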

"And none of this shifting around of jargon addresses my detailed arguments about the problem of the final output step and my original point that nothing can be informed of, and thereby experience, its functional role because it cannot be informed of its effects. Experiencing things without being informed of them is usually called clairvoyance or telepathy and is inconsistent with causality."

The functional role, here, is specified by which algorithm is being run and by what the current state is.  There is no way for the system to know, in the intermediate stages, what the output will be when the run is finished, but also no need to know that - just the fact that the algorithm is what it is is enough.

"The obvious alternative is that experience is determined only by the content and mode of input to an experiencing entity - which is more in line with common sense anyway."

That alternative, considering only input but not output, is no better as far as I can tell.  It runs right into the swampman example - swampman had none of the usual input, but his consciousness is surely the same as that of a regular person.

Computationalism has no problem with swampman - he begins implementing the algorithm at an intermediate stage, that's all.

Some functionalists would hold that swampman's consciousness is different, based on external functioning.  I would say that your argument might have traction against those functionalists, but I never saw any reason to take their externalism seriously anyway.

I would like to see some people who support functionalism per se respond to this and clarify just what functionalism means to them.

Regards,

Jack


2009-09-22
Is Functionalism impossible?
I am not sure that we are getting anywhere. I am getting reaffirmations of faith in functionalism, but nobody is addressing the logical paradox that was the basis of my initial post. 
In reply to Jason's: "Isn't the point of functional explanations that they do not resemble descriptions of physical causal chains?

The notion of "mental states" does not pick out any specific physical entity, process, or causal chain.  A functional role is not a series of events leading from an input to an output.  Rather, it is how an input/output relation behaves in a process or system.  Functionalism asks us to stop focusing exclusively on the input/output relation itself, and to look instead at how that relation functions in a context. The question is not what leads from input to output, but what depends upon and uses that relation.  The mental state is not what happens between the prick of the pin and the visible behavior.  It is not what causes the behavior.  It is rather what we say about the behavior in so far as we regard it as a reaction to the prick of the pin."

Yes, this seems to be what functionalism says, and I am saying that it cannot work because it involves a causal impossibility. It appears to invent an arbitrary heuristic concept outside our normal rules of explanation, which as you say is not based in events in the world, but rather in how some arbitrary third party chooses to interpret those events. If a mental state is just what we say about something it is not a state in any sense that is normally understood. It is merely a conceit about a putative state. I am afraid I am only interested in the state that causes the behaviour, the experience that goes with it and the relation between the two. I see no reason to invent some other concept, especially if it violates the rules of the real world.


'This is an old intuition about mental states; specifically, the intuition that mental states must be known or experienced by the people who have them regardless of their output.'


I would not regard this as an old intuition, merely as a reasonable definition of a mental state. As I have said, I cannot understand what this other concept of a mental state might be. If it is not an experiential state then why not call it a brain state, in which case it ought to have a perfectly normal physical causal description. If the point of functionalism is not to explain what experiences go with what brain processes I cannot think what it is for. Ordinary science does the rest.

It seems to me increasingly that functionalism has the features of a religion which, once you are converted to it, allows you to say things that cannot apply to the real world and also to say that those who do not believe are simply unenlightened. Unless functionalists can address my original paradox with some real counter-arguments I think I can rest my case.


(I could discuss the issue of functional roles specifying causal chains in specific systems at further length but my first post seems to deal with this reasonably clearly. For a given computer or brain with a given history a specific functional role is likely to be associated with a specific causal chain at least to a degree sufficient for the purposes of my argument.)






2009-09-27
Is Functionalism impossible?
Hi Jon. You wrote:
I would not regard this as an old intuition, merely as a reasonable definition of a mental state. As I have said, I cannot understand what this other concept of a mental state might be. If it is not an experiential state then why not call it a brain state, in which case it ought to have a perfectly normal physical causal description. If the point of functionalism is not to explain what experiences go with what brain processes I cannot think what it is for. Ordinary science does the rest.
As I understand it (I'm studying this at the moment) what functionalism focuses on is neither experience nor physiology but logic. It is based on the concept of the Turing Machine, the output of which is determined by the input in conjunction with, or we might say in the context of, the internal state. The implementation, the physical arrangements, are irrelevant, assuming that they support the logic required. The concept of experience, or consciousness, has no part to play in this, though many believe it amenable to a functionalist explanation. But the main point here, I think, is that the functionalist concept of a mental state does not differ in principle from the internal state of a Turing Machine or, indeed, that of any existing computer. (A general purpose computer is a Universal Turing Machine.)
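
For illustration only, here is a rough sketch in Python (a toy machine of my own invention, not taken from the course texts) of that idea: everything the device does is fixed by the input tape together with a transition table over internal states, and nothing about the physical implementation appears anywhere in the description.

# A toy, Turing-machine-style device, purely illustrative.
# Behaviour is fixed entirely by the input tape plus the transition table
# over internal states; the physical substrate never enters the description.

# (state, symbol read) -> (symbol to write, head movement, next state)
TABLE = {
    ("flip", "0"): ("1", +1, "flip"),
    ("flip", "1"): ("0", +1, "flip"),
    ("flip", "_"): ("_", 0, "halt"),   # blank cell: stop
}

def run(tape_string):
    tape = list(tape_string) + ["_"]   # "_" marks the blank end of the tape
    head, state = 0, "flip"
    while state != "halt":
        write, move, state = TABLE[(state, tape[head])]
        tape[head] = write
        head += move
    return "".join(tape).rstrip("_")

print(run("010011"))   # -> "101100": output fixed by input plus internal state

Any physical arrangement that reliably implements this table - valves, transistors, neurons - would count as the same machine, which is the sense in which the implementation is irrelevant so long as it supports the required logic.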

Here are some references from my course material:

Andy Clark, Mindware: An Introduction to the Philosophy of Cognitive Science, chs. 1-2
Jaegwon Kim, Philosophy of Mind, ch. 5
Tim Crane, The Mechanical Mind: A Philosophical Introduction to Minds, Machines and Mental Representation, ch. 3

2009-09-27
Is Functionalism impossible?
I am just trying to clarify the burden of proof here.

Mr. Edwards argues that the functionalist view of the mind, when applied to a physical process, entails backward causation.

Mr. Streitfeld responds that a functional explanation is not tied to any particular physical process.

But functionalists are generally token physicalists, in which case a functional explanation presumably must be consistent with physics.  So I think Mr. Streitfeld needs to show either why Mr. Edwards is wrong in his assertion that functionalism contradicts the traditional view of causation, or why that view of causation is not relevant to token functionalism. 

2009-09-27
Is Functionalism impossible?
As my formulation of functionalism implies, a functionalist would not (or at least, should not) claim that a mental state is a series of events leading to an observable behavior.  There are no "functional states," in that sense.  (I think I am in disagreement with David Lewis here, but I'm not familiar enough with his work to say much about that.  Though I'll just mention that we might be able to distinguish between two varieties of functionalism, and I am supporting the Rylean variety.)  So your initial argument against functionalism is not sound.  No counter-arguments are necessary, because your argument does not address (Rylean) functionalism.

You seem to acknowledge this fact by accepting my (Rylean) formulation of functionalism.  Your argument against functionalism, then, cannot be the argument about causal chains you originally presented.  Which leads to your recent argument, which is not so clear. 

You say functionalism is "an arbitrary heuristic concept outside our normal rules of explanation, which as you say is not based in events in the world, but rather in how some arbitrary third party deems to interpret those events."

I didn't say functionalism requires unusual or abnormal rules of explanation, or that it is not based in events in the world.  But, yes, the assignment of functional roles does depend on acts of interpretation.  This seems quite ordinary.  We often predict behavior by referring (either explicitly or implicitly) to dispositions, and not specific states or processes.  To say X has a functional role is to say X is disposed to behave in this or that way, and that this disposition serves some function in a given context. It is not to say that X has this or that state.

You say, "If the point of functionalism is not to explain what experiences go with what brain processes I cannot think what it is for. Ordinary science does the rest."

I think the point of functionalism is, among other things, to explain why our notions of various experiences do not pick out specific brain processes.  Any work to be done explaining experiences in terms of brain processes is there for science, and not philosophy, to discern.

And I do not think a reasonable definition of "mental state" is "a state which can be known regardless of its outputs."  Sure, that definition probably appeals to many people, but that does not make it philosophically sound.  If mental states can be known regardless of their outputs, then they are not defined by their outputs.  So, why claim they have outputs, or consequences?  And according to what are they known, if not by their consequences?

As Wittgenstein says, we have a clear enough idea of what it means to know something.  We have a grammar of the word "know," but we run into problems when we try to use it in inappropriate ways.  Your definition of "mental states" applies the word "know" in a way that doesn't seem to make sense.

You might appreciate Ryle's The Concept of Mind.

Regards,

Jason
Sept. 24, 2009

2009-09-28
Is Functionalism impossible?
Dear Robin,
Sorry Robin, but that reply seems bizarre. I have in front of me John Heil's Philosophy of Mind compendium, that includes many of the classic papers on functionalism, including Lewis, Putnam, Fodor and Armstrong. Almost all of them start off with the objective of finding an account of what it is to be in pain, as I do. (Tim points this out on page 132 of MM.) Pain is an experience. It seems to be chosen because it is challenging in that it does not seem to be anything more than an experience. It is not a quality of something else. These authors refer to mental states, or states of mind, in the way that we usually do - being in pain or having a belief. If all we are interested in is the logical or computational significance of some brain events in the context of some external referents then to say that such states are exhausted by their logical or computational significance in the context of some external referents does not seem to be the sort of thing anyone would want to argue about for four decades. Maybe philosophy really is that dull but I hope not. (Last time I was in Edinburgh, Galen Strawson warned me to keep away from philosophers, but at least half in jest.) Maybe ask Andy what he thinks?! I am afraid I give up on this one. At least David Longinotti seems to understand what I have been saying.

Best wishes

Jo

2009-10-13
Is Functionalism impossible?
On reflection, maybe I see where Robin's comment comes from. It may relate to a suspicion I have had about functionalism for a while - that it conflates the meaning, content or significance of a mental state to its owner with the meaning, content or significance to the rest of the world. Crucially I wonder if it is based on a failure to see that 'meaning' is polysemic and subject-dependent.
Maybe it started with someone like Putnam asking how we explain the meaning of a mental representation - say of a cow. We think there are biophysical events underlying this representation but want to know how the biophysics relates to the content. If we fall into thinking of meaning as meaning to the world in general then it seems fairly obvious that the specific physical substrate of the representation is not going to matter. All the representation needs to do is have the right functional role in relation to the rest of the world and cows as referents. This would appear to be a platitude.

However, if we are interested in the meaning of the representation to its owner there is a feeling that the specific physical substrate might matter, which is the Chinese Room issue. I find it hard to believe that a set of events which fulfill the correct functional role to be a representation of a cow as defined by the outside world occurring in a Chinese Room like device wired into my brain would mean 'cow' to me. My original post related to the fact that this feeling of unease is supported by the fact that such a determination of meaning-to-me would be causally impossible. 

This seems to lead to two conclusions. Firstly, functionalism must either be interpreted as a tautological platitude or as something impossible. Only if the multiple meanings of meaning are not recognised do we have something that might look interesting until we realise that it is one of those verbal mistakes Wittgenstein warned us of. Secondly, if we are looking for representations in brains then we need to distinguish representations with meaning C to the outside world and representations with meaning C to the owner. They will not be the same, and for good reason. This seems to be an important issue in practical applications of theories of the mind.

2009-10-13
Is Functionalism impossible?
I think one of the problems with the chain of causation argument is that it assumes that the chain is not a loop. If it is a loop, then the end result of the chain could indeed be caused by the previous end result of the chain. Defining what the end of the chain is is problematical; for the purposes of this argument, consider the end of the chain to be the point just before the loop is rejoined.

2009-10-13
Is Functionalism impossible?
Dear Jonathan,
I am a latecomer to this post, but it seems to me that functionalism/computationalism always worked best for desires and beliefs, which the desirers and believers do not have reliable access to, indeed are often mistaken about. I did not know that I believed functionalism to be nonsense for pains till I read this post. Turing machines do not have pains and pleasures, and those are the mental states we are aware of when we have them. It would be crazy to try to give them to our computers! The main problem, I think, is the rag-bag category of "mental state," which we may well owe to past philosophers of mind, such as Descartes and Hume. At least Descartes knew that some mental states are very confused, about themselves as well as about their contents. And Hume knew he had severe problems construing his mind as a causal sequence of discrete conscious states. He thought pleasure and pain do play the role of motivating both our behavior and our attempts to improve our beliefs. But he was no functionalist about them. Belief he did take to play the role of directing action, so if you believe there is a tack in your foot, you will try to remove it, whether or not you say "Ow!". For a sophisticated version of taking the content of beliefs to be this Humean one of affecting intention and behaviour, I recommend that of Hugh Mellor. But no theory that puts what we are immediately conscious of, such as content of current thought and pain, together with what we have to find out about, such as what we believe about functionalism, and what we can be wrong about, is going to be a defensible theory. Or so it seems to me.
 Annette Baier   

2009-10-13
Is Functionalism impossible?
Jon,

I don't think a functionalist would see any conflict between omitting experience/consciousness from the principles of functionalism, on one hand, and seeking to explain phenomena such as pain using these principles, on the other.

As a lowly student at the beginning of a taught MSc course I didn't "ask Andy", but went back to the book already mentioned instead. Under the heading of Machine Functionalism he writes "To be in such and such a mental state is simply to be a physical device, of whatever composition, that satisfies a specific formal description." (Andy Clark, Mindware, p14) His own position is indicated when, in the context of the idea of the brain as a "meat machine", he says "The attractions of such a view can hardly be overstated." (ibid p8)

As for whether "philosophy really is that dull", I find the arguments around functionalism absolutely fascinating!

2009-10-13
Is Functionalism impossible?
Mr. Longinotti,

I have not challenged token physicalism.  The point of functionalism (of the kind I am supporting) is that the language of mental states does not pick out particular existent events, states, or processes at all. It is not that they pick out non-physical (or non-causal) entities.  Rather, following Ryle, we may regard the language of mental states as a modal, hypothetical language.  When we attribute mental states, we are talking about how a person is likely to behave.  We are not indicating specific events or processes or states, though we are not denying that the behavior in question is the result of physical processes.  We are not indicating a causal chain of the sort Mr. Edwards suggests, but nor are we indicating any other sort of entity or process.

So functionalism as I understand it is perfectly compatible with token physicalism, and it is immune to Mr. Edwards' objection.

Regards,

Jason
Sept. 30, 2009

2009-10-16
Is Functionalism impossible?
Dear Robin,

I agree that functionalists may see no conflict where there is one - I guess that was the point of the thread.

I have not even started my MSc course (even if I do contribute reasonably regularly to a journal that, although not giving me 'pro' status, is well thought of by some) but I found Andy Clark very approachable and good to talk to. Maybe he is a functionalist! Maybe he has changed his mind! And I am not sure you have to be a functionalist to endorse the view that a brain is a meat machine. I have no problem with that view myself.

My other point was that if functionalism were all there is then philosophy would be unbearably dull. Fortunately philosophy seems to be about eminent, interesting people holding such diametrically opposed views that they have to admit in private that they think the others are barking mad. That in itself seems to tell us something very important about the way our minds work, so I tend to think that philosophy is as exciting as you want to make it. The arguments around functionalism are indeed fascinating. How is it possible for so many nice, intelligent people to believe in what others see as a transparent logical contradiction? And, as at the Piraeus, there is the added frisson that there are no right answers at the back of the book.

Best 

Jo

PS for Graeme: I cannot see that a 'loop' in the sense of a sequence of events involving the same material domain repeatedly alters the argument. A causal loop of the sort Penrose proposed with light cones bent back on themselves by gravity might, but I doubt we want to get into that territory.

2009-10-16
Is Functionalism impossible?

Mr Streitfeld

 

I fail to understand in what way your view can be characterized as ‘functionalist’.  You seem to be, like Ryle, a logical behaviorist.

Historically, functionalism was formulated in opposition to logical behaviorism, due to the latter’s apparent inability to provide purely behavioral definitions of psychological states (see Putnam’s “The Nature of Mental States”).  The most fundamental tenets of functionalism are (1) that behavior is mediated by internal states, and (2) that these states are inter-definable as mental states by the ‘pattern’ of their causal relations.  By design, functionalism is inconsistent with logical behaviorism.

 

I believe you misunderstand functionalism where you assert its point is that “the language of mental states does not pick out particular existent events, states, or processes at all.”  This is true for logical behaviorism but not for functionalism, which holds that to be in pain is to be in a particular state of the brain (or its equivalent), a state that had a specific causal history and that will have specific causal consequences.  An implication of functionalism is that, in principle, one could look in the brain and find the particular internal state that is identical with an instance of pain.  Where functionalism is ‘non-specific’ is only in its claim that, like a computational state, a token pain need not be realized in a particular type of material.  This is not the same as the assertion that it does not pick out a particular event.

 

Since you express the views of a logical behaviorist, could you please explain in what sense you take your view to be functionalist?

 

regards,

 

David


2009-10-17
Is Functionalism impossible?
David (if I may be permitted the informality),

Indeed, I am supporting logical behaviorism.  Though I see no disparity between it and functionalism. 

I've just read the Putnam piece you mentioned, and I do not see how it could be construed as a fair criticism of logical behaviorism.  Putnam does not ask us to regard functional states as brain states.  Quite the contrary.  They are defined in terms of the behavior of the entire organism.  Putnam does not require a one-one correspondence between internal states and functional states of an organism.  As he says (emphasis in the original), "knowing the Total State of a system relative to a Description involves knowing a good deal about how the system is likely to 'behave', given various combinations of sensory inputs, but does not involve knowing the physical realization of the S1 as, e.g., physical-chemical states of the brain" (Putnam, "The Nature of Mental States," in Mind, Language, and Reality, 1975, p. 434).  Dennett, another functionalist (and also somewhat of a supporter of logical behaviorism), would agree, I believe.

Putnam falters when he tries to oppose his functionalism to a "behavioral dispositions" approach.  He does not adequately substantiate his claim, but rests it on the spurious claim that dispositionalists fail to specify the behaviors we associate with pain.  I cannot imagine any logical behaviorist denying that we could enumerate any number of ways of identifying pain behaviors.  What a logical behaviorist should question is Putnam's claim that our functional states are always discrete and clearly definable.  But I don't see why a functionalist should be wedded to such a notion.  I'm quite sure Dennett isn't, for example.

Logical behaviorism does not deny your first functionalist premise, (1) that behavior is mediated by internal states, unless you take "mediated" to have some bizarre connotations.  Obviously our behavior is caused by our brains, and there is no reason to ignore the fact that our organs can have more or less well-defined states.  Though we need not claim that they always have well-defined states, or that our well-defined behavior always corresponds to well-defined internal states.

Your (2) seems less benign, though I am not convinced it is a necessary postulate of functionalism.  As Putnam suggests, it is not that our neurological states cause other neurological states to behave a certain way; it is that certain neurological processes cause an organism to behave in certain ways.  The functional state of the organism can be explained (at least partially) by the workings of the brain, but we do not regard specific aspects of those workings as the state in question.  This is perfectly in line with Ryle's approach, as I understand it, though Ryle might avoid the term "state," I think, as it might lead us to adopt an overly simplistic view of dispositions.

Also, I do not think Ryle would call pain a behavioral disposition.  He distinguishes between feelings and mental states, and regards the latter as intelligent dispositions.  Feelings such as pain are indicators of agitations.  Pain is an inhibitor, not a propensity. (Ryle, The Concept of Mind, 1949, p. 106).  Ryle would not deny the physiological nature of such processes.

So I think, following Ryle, that if we are to focus on the philosophy of mind, as opposed to the physiology of emotion, we should not confuse feelings with mental states.  Instead of discussing pain as a mental state, we might focus on "knowing that one is in pain" as a mental state.  Such mental states are intelligent dispositions, which we can describe in terms of an organism's functional states, so long as we do not oversimplify what that entails.


Regards,

Jason
Oct 16, 2009

2009-10-24
Is Functionalism impossible?
Jon,

Just a quickie, because at the moment I'm slightly worried about getting behind on my coursework. As you might guess, under Andy's leadership, the embodied/embedded/extended mind scenario is quite popular here, so quite a lot of time is spent discussing it, and it is most definitely a functionalist enterprise. So yes, Andy is a functionalist, and no, he's not changed his mind recently, on that issue anyway. I should perhaps hesitate to speak for my academic superiors, but I really don't think there's any room for doubt about that.

Robin

2009-10-25
Is Functionalism impossible?
It does not surprise me, Robin. I am at present involved in a deep discussion with eight other people and we each completely agree with every other person on at least one issue and completely disagree with them on at least one other issue. We have nine viewpoints which all overlap but are also all incompatible. Perhaps next time I meet Andy I will have to ask him how functionalism can be possible.