
2009-05-25
Concepts: The Very Idea
I'd like to invite discussion on "Concepts: The Very Idea" (Harnad 2009). This is a discussion paper for a symposium on Edouard Machery's "Doing Without Concepts" (OUP 2009).
ABSTRACT: Machery (2009) suggests that the concept of “concept” is too heterogeneous to serve as a “natural kind” for scientific explanation, so cognitive science should do without concepts. I second the suggestion, and propose substituting, in place of concepts, inborn and acquired sensorimotor category detectors and category names combined into propositions that define and describe further categories.

2009-06-03
Concepts: The Very Idea
Reply to Stevan Harnad
Your suggestion bears some resemblance to a proposal to split concepts into nonlinguistic and linguistic concepts, which Sam Scott and I made in our paper Splitting Concepts (published in Phil Sci, 2006):

http://www.umsl.edu/~piccininig/Splitting_Concepts.pdf

Do you agree that your suggestion is along similar lines, broadly speaking?

2009-06-04
Concepts: The Very Idea
GP: Do you agree that your suggestion is along similar lines... to split concepts into nonlinguistic and linguistic concepts...?
Perhaps, but the devil's in the details. I am not splitting concepts, I'm scuttling them (unless we're content to demote them to whatever turns out to be the brain mechanism that gives us our capability to categorize things). There is really only one way to detect categories, and that is via sensorimotor detectors. But categories can also be defined and described in words, so that someone who knows the description and has sensorimotor organs can identify what is and is not in the verbally described category -- on condition that the words in the description are themselves either grounded directly by sensorimotor category detectors, or the words in their description are grounded directly... etc.


Harnad, S. (1990) The Symbol Grounding Problem. Physica D 42: 335-346.

2010-02-06
Concepts: The Very Idea
Reply to Stevan Harnad

The fact that even opponents of concepts continue to use the term shows that it has an intelligible meaning. If it did not, the claim that “the concept of ‘concept’ is too heterogeneous to serve as a ‘natural kind’ for scientific explanation” would itself be rejected as too vague and ambiguous for serious consideration in a journal article. Referees would simply scratch their heads, unable to grasp what the author was intending to say. The very fact that the assertion is meaningful argues that it is in error.

 

Of course, it is argued that “we are entitled to continue using ‘prescientific’ folknomics to keep going while we wait for the science.” This may be true, but whatever nomenclature one uses, it needs to have a precise meaning to express scientific hypotheses. To say that “the X of ‘X’ is too heterogeneous to serve as a ‘natural kind’ for scientific explanation” is not scientifically intelligible if X is too heterogeneous to serve as a “natural kind” for scientific explanation.

 

Of course, the use of “concept” in ordinary language may be as wide as that of “game.” Still, a precise definition is available, for classical logic is very clear as to the meaning of “concept” in its domain. (See, for example, Veatch, Intentional Logic.) There, “concept” means an instrument of thought signifying a note of comprehension, e.g., classes of objects or properties. Such concepts may be linked by a copula to form a judgment, but an isolated concept neither affirms nor denies.

 

Taking apples as an example, we may have objective *apples*, a word “apples”, and a concept <apples>. “Apples” betokens *apples* by evoking the concept <apples>, which intends *apples* because it is potentially evoked specifically by *apples*. (“Potentially” is crucial because the meaning and intent of a concept does not depend on actual existence, nor does it depend on an actual enumeration of its instances.) Thus, the non-existence of unicorns does not militate against <unicorns> being a valid concept. Nor do concepts lack universality because we have not encountered every instance of their application.

 

This is certainly adequate to pick out a natural kind by anyone that admits that humans employ instruments in thinking. Or do the opponents of concepts think that we do not think and wish to convince their readers to think that we do not think? But, then, how could they?

 

Stevan Harnad suggests that when someone in his lab utters “concept,” “invariably what they mean is the unknown cerebral wherewithal that generates the capacity to do something or other.” Let me suggest that this is simply false, as the concept is not unknown at all. What is unknown is how to reduce it to a “cerebral wherewithal,” which is far different. People are quite aware that they think and that in thinking they use instruments of thought that signify what they are thinking about. Further, many times the thinking is about a problem which is not solved, and perhaps insoluble, so the thinking does not terminate in “the capacity to do something or other.”

Let us reflect on this second point, as it is essential to Harnad’s thesis. He is seeking to redefine “concept” as the “cerebral wherewithal that generates the capacity to do something or other.” But if thinking terminates not in a “capacity to do something or other” but only in inarticulate irresolution, there is still a conceptual process, albeit not one meeting his definition. Harnad is entitled to define terms as he wishes, and perhaps, as an authority figure, to demand that others follow his lead, but that does not change the fact that many conceptual processes do not terminate in any new capacity to act. If there is no new capacity to act, nothing is generated and the definition fails. This reflects the contemporary confusion between thought ordered to action (praxis) and thought ordered to contemplation (theoria).

Homunculi are a theoretical construct that leads to an infinite regress of homunculi within homunculi and is useless as an explanation. Further, even if there were a homunculus within me, awareness in that parasite would not explain why I am aware. So homunculi are unintelligible as an explanation of consciousness. We must simply accept the contingent fact of experience that humans are aware as a datum. Indeed, positing homunculi affirms by performance that there is an awareness which is otherwise unexplained by cognitive theory.

So, it is absurd to reject percepts because the concept is “homuncular.” Our awareness of percepts is not an awareness of homunculi, which are added as a (failed) hypothesis. It is like taking something which is not black, painting it black, and then rejecting it because it is black. Again, we know that Aristotle’s theory of gravity is untrue, as is the theory of homunculi. Yet we do not reject the datum that things fall because Aristotle’s theory fails to explain it.

 

It is a category error for Pylyshyn to take a datum, as our having percepts is, and reject it as “nonexplanatory.” It is not the job of data to be explanatory. Rather, it is Pylyshyn’s job as a theorist to explain that datum. Painting data with the stain of being “folkish” or “nonexplanatory” does not make them go away.

Concepts and percepts are not unobservable, but observed by all who experience them. They are not intersubjectively observable, but that is a different thing – a failed criterion proposed by logical positivism. The root problem here is that the methodological paradigm of the physical sciences does not fit the problem of mind. Physics is concerned with objective phenomena, so it is proper to filter out subjective data. Since the mind is the knower as subject, subjective data is definitive of it. Of course we can investigate the brain as an object, but that cannot yield, and has not yielded, a theory of consciousness, for awareness is the act of being a subject. It can only yield a theory of data processing absent awareness, and that is precisely what it has yielded.


2010-02-11
Concepts: The Very Idea
Reply to Dennis Polis

I would concur with much of the content of Dennis Polis's post. The target paper seems to be based on an externalist view of concepts of the sort that Hinzen, in my view successfully, sets out to demolish in his Essay on Names and Truth (Oxford 2007). I would agree with Polis that concepts are primarily defined in internal terms - either as components of experience (if often 'thin') or as the internal resources that generate those components of experience. The resources that generate associated sensorimotor events are only contingently associated.

I have sympathy with a desire to banish 'concept' from a research environment if the word has become imbued with adverse theoretical presuppositions - particularly externalist ones. But that does not seem to be Stevan Harnad's motivation. What I find interesting is that both Polis and Harnad seem most at pains to avoid implicating 'homunculi'.

I continue to be puzzled by the claim that an infinite regress occurs when postulating homunculi, at least defined as subdomains of the brain that receive, from other parts of the brain (such as V1 etc), the inputs that form the basis of the experience of what Polis calls 'I'. It is widely acknowledged that such a regress only relates to Dennett's 'straw homunculus' that repeats entirely the talents it is rung in to explain: probably only postulated by anti-homunculists. If it is possible to explain the association of experience with the biophysics of a whole human body, nervous system or brain, whichever preferred, then it is presumably possible to do so for a subdomain of brain. Moreover, neurobiology suggests that the contents of experience are encoded in restricted pathways that carry data selected from a wide range of sensory and subliminal memory data, not to mention skeins of housekeeping interneuronal pathways.

My impression is that the real impasse in philosophy of mind is a fear of accepting that percepts and concepts, including the sense of being 'I', are part of the experience of inputs to very small components of brains. Homunculi are absolutely fine; there is no need to be frightened of them. They are very likely us and have concepts encoded in their inputs. I would submit that if we try to find out what they might be then we might explain what concepts are.


2010-02-11
Concepts: The Very Idea
Reply to Dennis Polis
DOING WITHOUT CONCEPTS DOES NOT MEAN CONCEIVING WITHOUT DOING

DP: "the term [concept]... has an intelligible meaning"

So does the word "notion," or "thing" -- but they won't get you very far if you want to explain rather than just point.

DP: "a precise definition...: “concept” means an instrument of thought signifying a note of comprehension, e.g. classes of objects or properties. Such concepts may be linked by a copula to form a judgment, but an isolated concept neither affirms nor denies"

Instrument? "thought"? "note"? "comprehension"? "judgment"? These all sound, at best, like uncashed pointings.

"Classes" I understand. But those are classes, and we were inquiring about concepts.

And I understand "propositions" (as in "The cat is on the mat") and truth values, T&F, and affirming and denying. And words, and referents.

But we were talking about concepts, and in particular, explaining them.

DP: "we may have objective *apples*, a word “apples”, and a concept <apples>. “Apples” betokens *apples* by evoking the concept <apples>, which intends *apples* because it is potentially evoked specifically by *apples*"

The word "apples," the referent of the word "apples" (apples), and the class (set, category) of things referred to as "apples"...

No problem so far (and "concept" is again redundant with "class").

"Betokens," I assume, means "refers to". But "evoking"? "intends"? "potentially evoked"? This is sounding cognitive and mental now, rather than just formal, so we need explanations, not just "evocations."

DP: "This is certainly adequate to pick out a natural kind by anyone that admits that humans employ instruments in thinking"
 
There may be something that everything we call a "concept" has in common, but what (beyond "class") is it? And what does a cognizer "have" when he has such a "concept"?

DP: "the concept is not unknown at all. What is unknown is how to reduce it to a 'cerebral wherewithal.'"

Fine. I accept that: We know what "things" are too; but in knowing that, we don't know much. 

We are waiting to hear, from cognitive science, what it is that a cognizer has, in his brain, when he has a "concept."

DP: "People are quite aware that they think and that in thinking they use instruments of thought that signify what they are thinking about." 

Yes. But the problem is not whether or even what we feel we are doing when we are thinking; the problem is to explain how, i.e., explain what our brain is doing in order to make us capable of thinking, and being aware we are thinking, etc.

DP: "many times the thinking is about a problem which is not solved, and perhaps insoluble, so the thinking does not terminate in 'the capacity to do something or other.'"

When I think of an unsolved problem I can and do do a lot of things that I cannot and don't do if I am not thinking of an unsolved problem. (I can describe the problem, think aloud about possible solutions, etc.) It is just that one of the things I do not and cannot do when I think of an unsolved problem is to solve it (unless I do solve it, in which case I can and do do that too.)

In other words, thinking is doing (and potential doing) too, whether you do it aloud or in your head (or with your hands, manipulating chess-men...)

DP: "many conceptual processes do not terminate in any new capacity to act."

Perhaps not a new one. But if you were thinking at all, you were doing something, and can usually make some of that something explicit, in overt action.

DP: "It is a category error for Pylyshyn to take a datum, as our having percepts is, and reject it as 'nonexplanatory.'"

If Pylyshyn wants to explain how we can recognize, categorize, manipulate, name, describe, imagine, remember and think of apples, he can certainly say that being told that we have a "percept" or "image" of apples does not explain anything. It is not an explanans, but merely a restatement of the explanandum.

DP: "Concepts and percepts are not unobservable, but observed by all who experience them.  Since the mind is the knower as subject, subjective data  is definitive of it."

Best to leave the problem of explaining consciousness out of the problem of explaining concepts. See the PhilPapers thread on "The Explanatory Gap".

Stevan Harnad

2010-02-22
Concepts: The Very Idea
Reply to Stevan Harnad

Edwards


First, let me respond to Jonathan Edwards. I certainly agree that insofar as words carry the weight of unconfirmed theories, we ought to avoid them. My point was that concepts properly so-called are data, and not a theory. Our main difference is the hypothesis of physical homunculi, which I consider a dead horse for a number of reasons:

1. As nearly as I can tell, and I am open to correction, your version of physical homunculi is nearly identical to Bernard Baars' Global Workspace hypothesis in requiring that conscious contents be brought together in a specific brain region. Baars (2002), “The Conscious Access Hypothesis: Origins and Recent Evidence,” Trends in Cognitive Sciences 6:1, Table 1, p. 49, summarizes the observed correlates of conscious and unconscious response. Consciousness is correlated with high activation of object recognition and other specialized areas depending on the stimulus, with no single area common to all the studies. To my mind, this falsifies the hypothesis that contents are localized in the brain to attain conscious status. All that is left is a hope, rather like creationism, in the face of contrary data.

2. I don't think that Dennett's analysis sets up a straw homunculus. Rather, it is forced on him by his a priori rejection of intentional reality and his peculiar version of eliminativism. Knowing is a subject-object relationship in which there is invariably a known object and a knowing subject. If one removes either, we no longer have an act of knowing. Being aware is the subjective side of knowing, and our neurons and astrocytes encode its objective contents. No matter how we structure the transformation of contents by functional brain areas such as V1, the TPO junction, etc., we still have encoded contents going in and differently encoded contents coming out. The internal workings of the black box are irrelevant. Such transformations cannot generate the relationship to an intentional subsystem (the subject) required to make the intelligible contents encoded in the brain actually known. Rejecting intentional subsystems a priori, as Dennett does, forces one to posit a homunculus. Whatever its structure, a physical homunculus can only represent the object side of the subject-object relationship. So one is forced into the regress with Dennett. The positing of any homunculus thus recognizes the necessity of a subjective subsystem without getting us any closer to explaining it.

3. The proposal with the greatest chance of working is that awareness is a kind of proprioception (J. J. C. Smart (2008), “The Identity Theory of Mind”, The Stanford Encyclopedia of Philosophy (Fall 2008 ed.), sect. 7. Metzinger (2003), Being No One, makes a similar proposal.) In fact, it is hard to see any alternative to proprioception, as it is the only way of gaining data on bodily states. I can give many reasons why this fails, but the most obvious is that proprioception provides data on a bodily state and not information on what that state signifies. Thus, proprioception may tell me that my arm is raised, but not that it is raised to signal a friend, to take an oath, or to ask a question. If we had proprioception of brain states, the result might be information on activation regions, cortical waves or neural firing rates, but it would not link that state data to the world state it encodes. The encoding is a relationship, and the putative proprioception only provides information on one relatum (the brain state) and not on the other (the state of the world so represented). For this and many other reasons proprioception cannot model awareness.

Harnad


Turning to Stevan Harnad's comments, our central difference is that he believes ideas need to be explanatory, while I see them as data -- or so it would seem, because later he affirms that "we were talking about concepts, and in particular, explaining them." If that is so, then concepts are data, and the more precisely we define the data, the better. Still, you seem ambivalent, as it is difficult to reconcile the view that concepts are to be explained with the admonition not to discuss them. It is absurd to refuse to allow people to use "falling" because falling does not explain anything, but is a datum to be explained. Similarly, "concept" has a well-defined technical meaning in classical logic, which I explained, and if cognitive science is to explain thinking, it ought minimally to account for the efficacy of classical logic. Its efficacy hinges on its applicability to the world, and that depends on the reference of concepts. If I cannot apply the concept {apple} to real apples, then thinking about apples can never lead to the "knowing how" central to your view of cognition.

SH: "Instrument? "thought"? "note"? "comprehension"? "judgment"? These all sound, at best, like uncashed pointings."

To define a theoretical structure, logic tells us we need to employ undefined terms to avoid an infinite regress. Still, I can define many of these terms, but chose to avoid tedious explication. If there is some term that is causing confusion, I will be happy either to define it or to provide examples and discussion sufficient to make the term's meaning clear. In science one starts by pointing out the data, then constructs an explanatory theory. Thus, I make no apology for "uncashed pointings."

SH: "... (and 'concept' is again redundant with 'class')."

"'Betokens,' I assume, means 'refers to'. But 'evoking'? 'intends'? 'potentially evoked'? This is sounding cognitive and mental now, rather than just formal, so we need explanations, not just 'evocations'."


Let me apologize for the text, which got muddled without my noticing because I use less-than and greater-than signs to denote conceptual objects, to distinguish them from the correlative linguistic expressions. Apparently, as I noted in my bug report, the system will not display text so marked. So, I will use {} to denote intentional objects ({apple} is a concept and {the apple is red} is a judgment) and "" to denote the expression of intentional objects ("apple" is a term and "the apple is red" is a proposition). These are not to be confused, because they do not signify in the same way: intentional signification (formal signs) works via binary relations, while linguistic signification (instrumental signs) works via ternary relations.

I would argue that 'class' and 'concept' are not convertible because the class of apples is constituted of all the actual apples, while the concept applies to any potential apples which may come to be. Further, classes are defined extensionally in an enumerative manner, while concepts are defined intensionally in terms of properties, without the necessity of enumeration. The fact that the properties can be applied to determine what is to be added to the class does not make the class equivalent to its defining properties. For example, consider a universe in which all and only red objects are apples, so that the class of red objects is the class of apples. That does not justify the identification of the concepts {red} and {apple}.
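The extensional/intensional distinction can be put in programmer's terms as a minimal sketch (the code, data, and names here are purely hypothetical illustrations, not anyone's actual model):

```python
# An extensionally defined class vs. an intensionally defined concept.

# A class, defined extensionally: an enumeration of actual members.
apples_class = {"apple_1", "apple_2", "apple_3"}

# Concepts, defined intensionally: property tests that apply to any
# potential instance, with no enumeration required.
def is_apple(thing):
    return thing.get("kind") == "apple"

def is_red(thing):
    return thing.get("color") == "red"

# In a world where, as it happens, all and only red things are apples,
# the two predicates pick out the very same class (same extension)...
world = [{"kind": "apple", "color": "red"},
         {"kind": "apple", "color": "red"}]
assert [t for t in world if is_apple(t)] == [t for t in world if is_red(t)]

# ...yet the concepts remain distinct: a merely potential green apple
# satisfies one intension but not the other.
potential = {"kind": "apple", "color": "green"}
assert is_apple(potential) and not is_red(potential)
```

The point of the sketch is that coextension in one "universe" does not collapse the two predicates into one: they still classify potential instances differently.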

Of course it is "cognitive and mental." I make no apology for that. Current cognitive science has a fundamental methodological problem, viz., physics is a bad paradigm for it. In the physical sciences we study the objective, physical universe. Accordingly, our method projects all subjective data onto the null set (projects it out). As physicists, we do not care what is going on in the observer's mind. However, in cognitive science we claim to be studying the mind in relation to the world, and so data on the subject per se is essential. Projecting out data on the subject qua subject effectively eliminates the very thing we claim to be studying, i.e., the knowing subject. If we want to understand the knowing subject in relation with the known world, it is reasonable to investigate the physical mechanisms linking the subject to its objects, including sensory representation and processing, but that neither exhausts the data nor provides a complete picture of cognition, for it misses one side of the subject-object relation.

SH: "There may be something that everything we call a 'concept' has in common, but what (beyond 'class') is it? And what does a cognizer 'have' when he has such a 'concept'?"

Of course this is a deep problem related to the historical problem of universals, so no short answer can be complete. Some facts are clear: (1) Concepts involve intelligible information, and as Claude Shannon pointed out, information is the reduction of possibility. (2) As Aristotle pointed out in De Anima iii, the mere presence of information (his "phantasm") is not a concept. To have a concept, we have to be aware of intelligible contents, changing them from intelligible to understood contents. (Aristotle called awareness "the agent intellect" because it makes intelligibility actually understood.) (3) Awareness is an act, and so we have a representation of contents (in some brain state), plus awareness of the meaning of the representation (not awareness of the representation, which is what proprioception would yield). Thus, a concept is not an object or data structure but a process applied to a representation. ({Apples} is just me thinking of apples.) (4) Experiential concepts have a dynamic connection with their originating referents. Apples acting (via various sensory modalities) on me give rise to the contents that I become aware of in thinking {apples}. Because real apples have the objective ability to activate the same (neural net) representation, they have an objective power to evoke the same {apple} concept. This is also what allows our thinking about apples to be applied to real apples -- because (ideally) real apples and only real apples evoke the concept and so warrant the application of our {apple} thoughts. So what the cognizer has in a concept is awareness of potentially applicable contents.

SH: "We are waiting to hear, from cognitive science, what it is that a cognizer has, in his brain, when he has a 'concept.'"

Let me suggest a different conceptual space, one drawn from physics. The ontology of physics contains not only the measurables of space, time, matter and force fields, but also immaterial laws of nature. The laws of physics (descriptions) are not the laws of nature but only approximations of them; the laws of nature are discovered, not invented. Nor are the laws properties, for properties are logically posterior to the objects instantiating them, while the laws are logically prior, explaining the existence and operation of observable objects and even the origin of the universe. The laws are specifically immaterial because it is a category error to ask what the laws are made of, how big they are or what their mass is.

Further, the laws are in the same genus (let's call it "logical propagators") as committed human intentions. Only the laws of nature and committed intentions allow us to propagate present information into future information. If the present state of a physical system is S1, then by the laws of nature (assuming no new factors) the future state will be S2. If I commit to doing x tomorrow, then we can predict (assuming no new factors), that I will do x tomorrow. It is not unreasonable, and not a violation of methodological naturalism, to assume that just as physics has both material and immaterial elements in its ontology, so ought cognitive science.

I would suggest that subjectivity, like the laws of nature, is specifically immaterial, and that the mind has two interacting subsystems: a neurophysiological subsystem responsible for data processing and motor control, and an intentional one responsible for awareness and supervision. The relation between these two subsystems is nomological (just as in physics). The intentional subsystem perturbs brain dynamics to effect direction, just as interactions in physics perturb the dynamics of non-interacting systems. In support of this we know from the placebo effect, and from brain scans showing different activation patterns before and after cognitive therapy of OCD, that intentionality can modify brain states. We also know (Krippner, et al. (1993), “Demonstration Research and Meta-Analysis in Parapsychology,” http://findarticles.com/p/articles/mi_m2320/is_n3_v57/ai_15383545) that "The mean effect size for the experimental studies [of intentional control of random number generators] was small, 3.2 x 10^-4, but significantly higher than the mean for the control studies (z = 4.1)." Thus, there is solid statistical evidence of such perturbations (the effect averaging 32 cases per 100,000). While these perturbations are small, the brain evolved as a control system, and control systems use small inputs to effect large outputs.

None of this is "supernatural" in the sense derided by metaphysical naturalists, because it is all based on accessible observations, analysis and concepts derived from physics.

SH: "When I think of an unsolved problem I can and do do a lot of things that I cannot and don't do if I am not thinking of an unsolved problem. (I can describe the problem, think aloud about possible solutions, etc.) It is just that one of the things I do not and cannot do when I think of an unsolved problem is to solve it (unless I do solve it, in which case I can and do do that too.)"

Of course you can and sometimes do. The question is not what you might or can do as a result of thinking. The question is what is essential to thinking, and the fact that you might or might not express yourself in certain ways shows that there is no essential connection between thinking and doing. Thinking is temporally and logically prior to its expression in word or act. So, they cannot enter into its definition.

Doing "in your head" is not third-person observable. What might be observable is some regional activation reflecting data processing, or pulse trains where electrodes have been placed, but not what you are thinking of, and not your thinking per se. This does not conform to a behaviorist or functionalist model of cognition, but it does conform to the notion that we can observe thinking in the first person, which is the data missing in third-person models of cognition. So, if we are going to agree that data of first-person origin is essential to complete the picture, we are closer to agreement.

SH: "Best to leave the problem of explaining consciousness out of the problem of explaining concepts."

This is the essence of our disagreement. The brain is full of contents which are processed without awareness. None of them is a concept. It is only because we are aware of certain contents that they become concepts.

DFP


2010-02-22
Concepts: The Very Idea
Reply to Dennis Polis

TELEKINETIC CONCEPTS

DP:  "'class' and 'concept' are not convertible because the class of apples is constituted of all the actual apples, while the concept applies to any potential apples which may come to be. Further, classes are defined extensionally in an enumerative manner, while concepts are defined intensionally in terms of properties without the necessity of enumeration."

"Concept" still means no more to me than "idea," but I have an idea of actual apples and I also have (another) idea of potential apples. 

Yes, classes (sets, categories) have both extensions and intensions. In the brain, the capacity to recognize all instances of members of a category as members of that category requires detectors of the features that distinguish instances of members from instances of nonmembers. Since instances vary and never exactly recur (in time and space, even if all other features are identical), it follows that all nontrivial categories (i.e., categories that are not just based on the rote memorization of individual instances) are "potential" in that any instance that has the requisite features is a member. (Enumeration rarely has anything to do with it: nontrivial categories have an infinity of potential members, but no one counts...)
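The feature-detector picture of category membership can be sketched minimally in code (the feature set and names are hypothetical illustrations, not an actual model of any category):

```python
# Category membership via feature detection: an instance belongs to the
# category iff it has the requisite distinguishing features. Because the
# test turns on features rather than on an enumerated list of members,
# the category is "potential" -- any novel instance with the features
# counts, and no one ever counts the members.

REQUISITE_FEATURES = {"round", "stem", "edible"}  # toy feature set

def in_category(instance_features):
    """Member iff every requisite feature is present in the instance."""
    return REQUISITE_FEATURES <= set(instance_features)

# A never-before-seen instance is recognized as a member, because
# membership depends on features, not on prior enumeration.
assert in_category({"round", "stem", "edible", "green"})

# An instance missing a requisite feature is excluded.
assert not in_category({"round", "edible"})
```

This is of course a trivial stand-in for the learned, approximate sensorimotor detectors the text describes; the point is only that intensional membership tests make category extension open-ended.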

DP:  "The fact that the properties can be applied to determine what is to be added to the class does not make the class equivalent to its defining properties." 

The brain does recognition of instances (from direct sensorimotor experience or from verbal description/definition); it does not do ontology. The features used to identify category members are provisional, approximate, and mostly grounded in sensorimotor experience. (And words are almost all category names.)

Harnad, S. (2005) To Cognize is to Categorize: Cognition is Categorization, in Lefebvre, C. and Cohen, H., Eds. Handbook of Categorization. Elsevier.

Blondin-Massé, A., Chicoisne, G., Gargouri, Y., Harnad, S., Picard, O. & Marcotte, O. (2008) How Is Meaning Grounded in Dictionary Definitions? In TextGraphs-3 Workshop, 22nd International Conference on Computational Linguistics.

DP:  "Of course it is "cognitive and mental"... If we want to understand the knowing subject in relation with the known world, it is reasonable to investigate the physical mechanisms linking the subject to its objects, including sensory representation and processing, but that neither exhausts the data, nor provides a complete picture of cognition for it misses one side of the subject-object relation."

To explain organisms' cognitive function, we have to explain how (and why) the organism is able to do all that it is able to do, and, in addition, we have to explain how (and why) it feels. Best not to conflate the two, and to bracket the feeling until we can at least explain the doing. (This is why it's probably best not to run together the PhilPapers "concepts" thread with the "explanatory gap" thread.)

DP:  "To have a concept, we have to be aware of intelligible contents, changing it from intelligible to understood contents." 

This, I think, is precisely the sort of conflation to avoid. Do the doable part first: explain the doings and the doing capacity. Save the (probably undoable) part, namely trying to explain feeling, till the doable part's done.

Harnad, S. and Scherzer, P. (2008) First, Scale Up to the Robotic Turing Test, Then Worry About Feeling. Artificial Intelligence in Medicine 44(2): 83-89.

Harnad, S. (2007) From Knowing How To Knowing That: Acquiring Categories By Word of Mouth. Presented at Kazimierz Naturalized Epistemology Workshop (KNEW), Kazimierz, Poland, 2 September 2007. 

DP:  "Awareness is an act"

I think it is begging the question to say that awareness is an "act." Acts (things done by the organism or robot) are acts (and that includes any dynamical component of their physiology or biochemistry). But awareness (feeling) is very prominently not an act. It may be closely correlated with an act or its underlying dynamical state, and it happens in real time, but to dub it an "act" misses precisely what makes feeling different, special and problematic (and why all that needs to be bracketed till the "doing" part is fully explained functionally).

DP:  "what the cognizer has in a concept is awareness of potentially applicable contents"

To know what an apple is and what "apple" means is to be able to recognize, manipulate, and describe apples (and descriptions of apples). Those are doings and doing capacities. That they are also accompanied by feelings is another matter, best deferred...

DP:  "The laws [of physics] are specifically immaterial because it is a category error to ask what the laws are made of... just as physics has both material and immaterial elements in its ontology, so ought cognitive science." 

I'm afraid this analogy does not help. Feelings are not laws, and cognitive science is just reverse bioengineering. Its "ontology" is just the ontology of physics, physiology, biochemistry and computation.

DP:  "subjectivity, like the laws of nature, is specifically immaterial, and that the mind has two interacting subsystems: a neurophysiological subsystem responsible for data processing and motor control, and an intentional one responsible for awareness and supervision." 

This is what one would have thought, based on our telekinetic intuitions; but telekinesis is false, and, on the face of it, feeling is real enough, yet problematic, hence best set aside for now, till we first sort out our cognitive doing capacity -- in particular, what "having a 'concept'" empowers us to do, and what our brains have to "have" in order to be able to do that...

DP:  "we know from the placebo effect, and from brain scans showing different activation patterns before and after cognitive therapy of OCD, that intentionality can modify brain states. We also know (Krippner, et al. (1993), “Demonstration Research and Meta-Analysis in Parapsychology,”...) that "The mean effect size for the experimental studies [of intentional control of random number generators] was small... but significantly higher than the mean for the control studies ..." Thus, there is solid statistical evidence of such perturbations... None of this is "supernatural" in the sense derided by metaphysical naturalists, because it is all based on accessible observations, analysis and concepts derived from physics."

It would appear that you take this as evidence that telekinesis is true after all: I don't think most people would agree that that is shown by this evidence. I certainly cannot.

DP:  "The question is not what you might or can do as a result of thinking... there is no essential connection between thinking and doing." 

I'm afraid I again cannot agree. I think thinking evolved in the service of doing, that the connection is indeed "essential," and that cognitive science's first task is to explain doing and doing-capacity, with thinking being presumably the internal dynamics and computation that generate the doing and the doing-capacity. ("Internal" in the sense of internal to the brain; I have bracketed the fact that some of the thinking also happens to be felt, i.e. conscious, mental.)

DP:  "if we are going to agree that the data of first person origin is essential to complete the picture, we are closer to agreement."

Introspections about thinking are useful only if they give us (nontrivial) clues as to the functional mechanisms underlying thinking and doing capacity. (They sometimes do, but quite rarely, in my experience...) 

DP:  "The brain is full of contents which are processed without awareness. None of them is a concept. It is only because we are aware of certain contents that they become concepts."

I'm afraid I still don't know what concepts or ideas are, and this detour into the problem of consciousness leaves me even more clueless. It just raises the obvious question: Why do "concepts" (whatever they are) need to be felt? Why can't they just be "had," so they can do their job (in recognizing, manipulating, and describing whatever it is that they are concepts of)?

But this does not strike me as something that is likely to shed more light on the concept of concept in cognitive science. I suggest redirecting any further discussion of concepts and consciousness to the "Explanatory Gap" thread and reserving this thread for the functional aspects of concepts.

Stevan Harnad


2010-02-24
Concepts: The Very Idea
Reply to Stevan Harnad
Stevan Harnad and I are reasonably close in our understanding of the relation of classes and concepts, providing that he understands that to be a potential apple is to potentially exist in such a way as to be able to evoke the concept {apple}. I have no problem with similar neural net activation in response to similar stimuli being implicated in classification. I say implicated, for the setting sun may activate the complex representing oranges, giving rise to an association, but I can subsequently decide that even though the first thing that came to mind (was activated) was oranges, the setting sun is not an orange. Thus, neural net activation is supportive, but that does not allow us to conclude that it is decisive without some further a priori assumptions.

Where we are far, far apart is in our view of the centrality of awareness to knowing, concepts and the nature of the mind. One cannot claim to be studying mind or cognition while ignoring (or placing on indefinite hold) its one distinguishing characteristic, i.e. knowing. Stevan is confused in calling awareness "a feeling," and his idea of "act" cannot be endorsed. We also differ in our openness to data as opposed to a priori commitments.

Let's start from Stevan's fundamental assumptions and work our way up. While I fully endorse the science of evolution, there is no reason to think evolution does more than explain stimulus-response patterns of behavior. As Alvin Plantinga (1994), "Naturalism Defeated" (http://www.calvin.edu/academic/philosophy/virtual_library/articles/plantinga_alvin/naturalism_defeated.pdf) has shown, false cognitions can lead to appropriate patterns of behavior, the basis of natural selection. For example, I can think the lion coming at me wants to be my friend, and the way to make friends with it is to run as fast as possible. The result is selected behavior, but not veridical cognition. Since there is no evolutionary pressure to form veridical judgments, Stevan's claim that "... thinking evolved in the service of doing" has no logical or empirical support that I am aware of, and he has offered none. Rationally, what we can say evolved in the service of doing was data processing, which is entirely adequate to increase fitness and natural selection, without any need for the veridical reference characteristic of sound thought. (And, if we are incapable of sound thought, we might as well give up the philosophic enterprise.)

Stevan is surely entitled to decide what he wants to study, and he and his colleagues may even decide "that cognitive science's first task is to explain doing and doing-capacity." Still, they are not rationally entitled to assume that "thinking [is] presumably the internal dynamics and computation that generate the doing and the doing-capacity," without further warrant, especially as much thinking does not, and is not intended to, terminate in doing. So, we have two points against his (very common) stance: (1) there is no reasonable argument that evolution can give rise to more than data processing in support of doing, and (2) much thinking is not in support of doing at all -- certainly not the kind of doing that would further survival. Many examples of "impractical" thought come to mind: metaphysical reflection, abstract mathematics, and mystical contemplation. The last even leads to ineffable outcomes with no possible verbal doing. All take time away from more practical pursuits that might lead to increased survival of offspring, and contemplation is correlated with chastity, which can hardly give it a selective advantage.

We all know what those who think "that cognitive science's first task is to explain doing and doing-capacity" are themselves doing: they are looking for their keys under the street light, not because there is any chance the keys are actually there, but because the light is better. Of course, there is nothing wrong with taking advantage of the light to find interesting things, but it is deceptive to call this looking for keys, or in this case, the nature of thought.

Before going further it is good to remind ourselves that the subject of this thread is concepts, and not the productive use of the tools of cognitive science. Those are very different topics, and we cannot allow the limitations of arbitrarily chosen tools to restrict our conclusions when other tools more adequate to the topic are at hand. Concepts are being aware of contents, even if being aware cannot be found under Stevan's street light.

I said that since the distinguishing activity of mind is knowing, which is essentially a subject-object relationship, we need to study both the objective and subjective side of knowing. In response, Stevan conflated knowing in general with knowing how, and went off on a tangent about feeling, which I did not mention. He never returned to subjectivity, or even bothered to address my central claim that knowing is a subject-object relationship. If how to study thought is as obvious as Stevan believes, it should be easy for him to say why my approach is ill-considered and misguided. So let me ask Stevan flat out: can there be knowing without a knowing subject and some known content or object? And, if there cannot be, how can we study knowing, as opposed to biological data processing, while ignoring subjectivity, i.e. being aware?

My approach does not exclude cognitive science, so it does not threaten his work. Rather, I am pointing out that cognitive science alone is inadequate to the essential act of mind: knowing.

Stevan calls awareness "a feeling." Surely that is both dismissive and ill-considered. In being aware of feelings, our feelings are the content of awareness, not awareness itself, which is standing as a subject in relation to our feelings as object. Being aware means standing as a subject; it is not any particular content, be it external objects or internal feelings. Having a knot in my stomach is a feeling, but being aware of a knot in my stomach is knowing I have that feeling, and not the feeling per se. I have no feeling, no awareness of body state, in being aware of Pythagoras' theorem. There is the theorem and me standing as a subject in relation to it. The associated body state may be some selective firing in my neural net, but I have no "feeling" associated with that, and remain unaware of the firing. So, by what stretch of the definition of "feeling" is standing as a subject a feeling?

Stevan says:

I think it is begging the question to say that awareness is an "act." Acts (things done by the organism or robot) are acts (and that includes any dynamical component of their physiology or biochemistry). But awareness (feeling) is very prominently not an act. It may be closely correlated with an act or its underlying dynamical state, and it happens in real time, but to dub it an "act" misses precisely what makes feeling different, special and problematic (and why all that needs to be bracketed till the "doing" part is fully explained functionally).

I am not sure what dictionary defines acts as acts. He offers no real definition, only examples and a claim that awareness is not an act without saying why, only that he finds it problematic. I can offer him sympathy, but not assent. My dictionary says that an act is doing a thing. Surely effecting a change is doing a thing, and making what was merely intelligible actually known by becoming aware of it is an important change. Saying awareness is an act is neither question begging nor problematic.

Stevan's claim that it "needs to be bracketed till the 'doing' part is fully explained functionally" is based on an a priori commitment to his peculiar epistemology, not on common usage or science. Have we bracketed the act of falling because we do not know the mechanics of gravity? (Newton's and Einstein's theories offer only descriptions, not functional explanations.) If we were to follow his suggestion, we would never look at data until it was explained -- which would mean that it would never be explained. This is a prescription for blindness, not progress.

He says:

To know what an apple is and what "apple" means is to be able to recognize, manipulate, and describe apples (and descriptions of apples). Those are doings and doing capacities.

Perhaps this might work for concrete objects like apples, but it will not suffice for abstract mathematical constructions, which cannot be manipulated. Some can hardly be described. Further, it is false unless you take a behaviorist stance in which recognition is an appropriate behavioral response to an apple. It only works if recognizing entails being aware. A machine can respond to a presented apple by outputting "apple," but that would not fool most people into thinking it knew what an apple is. More fundamentally, however you choose to define "to know," it will not be what people do without including awareness. So the question is: are we studying what happens in nature, or embarking on some abstract construction which ignores the data of human nature?

Laws are not feelings, but then neither is being a subject a feeling. Subjects may have feelings, but they are not feelings. It is a category error to suggest that they are. The laws studied by physics control the motions of physical systems. Subjects control the motions of physical systems. If we define laws by their ability to control motions, a subject is a special kind of law. The idea that laws are universal to the extent of excluding perturbations by intending subjects has no empirical basis, but is the result of hubris. Wanting to know universal laws, and finding that the laws we know have a wide range of application, does not mean that we know universal laws. The idea that subjects can perturb physical systems intentionally, while it causes Stevan heartburn, has strong, repeatable experimental support.

This brings me to openness to data as opposed to a priori commitments. I prefer respecting data to a priori commitments. I cited a little of the data supporting my position: first, the placebo effect; second, brain scans (surely Stevan believes in them); and lastly, direct data on telekinesis (Krippner, et al. (1993), “Demonstration Research and Meta-Analysis in Parapsychology”), which shows an effect of 32 events per 100,000 with z=4.1. In return I got only a faith statement that would make any Creationist proud. Without considering the data, Stevan rejects it because "most people" do. That is not the scientific method I learned. I hope that he is not suggesting I become a creationist because most people are. I learned that when results we don't like are independently replicated many times, we lay down our old commitments and theories and accept reality. What am I missing? Oh, we need to be selective in applying our methods so as not to contradict our a priori commitments? Sorry! How many angels are dancing on that pin again?

In sum, Stevan has a number of positions unsupported by data or logic.
  • The idea that evolution can yield correct thinking as opposed to data processing outputting fit behavior -- vs. Plantinga's examples of non-veridical thinking yielding selected behavior.
  • The idea that all thinking terminates in action or is intended to terminate in action -- falsified by theoretical reflection and contemplation.
  • The idea that awareness is a feeling --  vs. feelings being on the object (content) side of the subject-object relation, while awareness is the subject side.
  • The idea that we should ignore data until we can make it fit our a priori model -- e.g. all acts of awareness, any data showing that intentionality can result in physical changes, and any other data falsifying Stevan's model.
He also seems unwilling to admit that there can be no knowing without a knowing subject.

Dennis Polis

2010-02-24
Concepts: The Very Idea
Reply to Dennis Polis

SCIENCE OF THE ANOMALOUS OR SEARCH FOR THE SOUL?

DP: "One cannot claim to be studying mind or cognition while ignoring... its one distinguishing characteristic, i.e. knowing." 

Unfelt knowing is fine; the problem comes with felt knowing. (But this is not a topic for this concepts-thread but for the explanatory-gap thread. I will reply briefly here, this last time, but please send any further postings on the subject of consciousness to the explanatory-gap thread as I do not wish to divert the discussion on modelling concepts toward the problem of consciousness.)

DP: "Stevan is confused in calling awareness "a feeling.""

Look closely at all your examples of states of awareness and you will notice that they are all felt states (and vice versa). (Please redirect follow-ups to the explanatory-gap thread.)

DP: "his idea of "act" cannot be endorsed." 

Later Dennis writes "My dictionary says that an act is doing a thing." I'm happy to endorse that!

DP: "While I fully endorse the science of evolution...  Stevan's claim that "... thinking evolved in the service of doing... has no logical or empirical support..."

Doing and capability of doing (i.e., performance and performance capacity, performance potential). I think that's precisely what evolution explains -- and it has virtually nothing but empirical support!

Harnad, S. (1994) Levels of Functional Equivalence in Reverse Bioengineering: The Darwinian Turing Test for Artificial Life. Artificial Life 1(3): 293-301. Reprinted in: C.G. Langton (Ed.). Artificial Life: An Overview. MIT Press 1995. 
Harnad, S. (2002) Darwin, Skinner, Turing and the Mind. (Inaugural Address. Hungarian Academy of Science.) Magyar Pszichologiai Szemle LVII (4) 521-528. 
Harnad, S. (2002) Turing Indistinguishability and the Blind Watchmaker. In: J. Fetzer (ed.) Evolving Consciousness Amsterdam: John Benjamins. Pp. 3-18.  
Harnad, S. (2009) On Fodor on Darwin on Evolution. Technical Report. Electronics and Computer Science, University of Southampton.

DP: "what we can say evolved in the service of doing was data processing, which is entirely adequate to increase fitness and natural selection, without any need for the veridical reference characteristic of sound thought."

I'm not quite sure what "sound" thought means (rigorous mathematical proof?), but if the underlying question is again about why thought ("data-processing")  is felt rather than just functed feelinglessly, please redirect this question to explanatory-gap thread.

DP: "much thinking is not in support of doing at all... Many examples of "impractical" thought come to mind..."

Doing and capability of doing (i.e., performance and performance capacity, performance potential), including planning capacity.

DP: "can there be knowing without a knowing subject and some known content or object? And, if there cannot be, how can we study knowing as opposed to biological data processing while ignoring subjectivity, i.e. being aware?"

There can be and is knowing, mostly felt knowing, hence there is a feeler. But how and why knowing is felt, rather than just "functed", is a question that should be redirected to the explanatory-gap thread.

DP: "Stevan calls awareness "a feeling." Surely that is both dismissive and ill-considered."

No, it's accepting, rather thoroughly considered, and calling a spade a spade -- but not a question for the concepts-thread.  (Please redirect follow-ups on this to explanatory-gap thread.)

DP: "My dictionary says that an act is doing a thing. Surely effecting a change is doing a thing, and making what was merely intelligible actually known by becoming aware of it is an important change. Saying awareness is an act is neither question begging nor problematic."

 (Please redirect this question -- a variant on the earlier ones above -- to the explanatory-gap thread.)

DP: "A machine can respond to a presented apple by outputting "apple," but that would not fool most people into thinking it knew what an apple is."

But maybe something that's closer to having the full performance capacity of the machine that we actually are -- such as a lifelong Turing-Test-Passing Robot -- would not be "fooling" us...

DP: "however you choose to define "to know," it will not be what people do without including awareness." 

(Please redirect follow-ups to explanatory-gap thread.)

DP: "that laws are universal to the extent of excluding perturbations by intending subjects has no empirical basis... that subjects can perturb physical systems intentionally... has strong, repeatable experimental support... I cited a little of the data supporting my position. Stevan rejects it because 'most people' do."

James E. Alcock (2003) Give the Null Hypothesis a Chance: Reasons to Remain Doubtful about the Existence of Psi. Journal of Consciousness Studies 10.
James E. Alcock (1987). Parapsychology: Science of the anomalous or search for the soul? Behavioral and Brain Sciences 10:553–643

Stevan Harnad


2010-02-26
Concepts: The Very Idea
Reply to Stevan Harnad
Stevan calls it the search for the soul. I call it focusing on the essential characteristic of knowing, being a subject-object relation. Stevan calls it a distraction from modeling, I call it enumerating the elements a model must replicate to be a model of the mind as found in nature. I have no fundamental problem with Stevan's approach to modeling complex stimulus-response relationships using a third-person approach. My problem is in calling the result a model of the mind. It is not.

Stevan's work, while worthy, deals with simulating input-output sequences, and has nothing to do with the supposed topic of the thread: concepts, which are instruments of thought (formal signs) and differ from machine or neural representations (instrumental signs) -- see http://xianphil.org/semiotics.html. I am happy to agree with him that evolution supports the development of complex data processing capabilities, but not with the notion that data processing in the absence of awareness (which is not a feeling) constitutes thought. Turing was quite clear that his test was a game providing no insight into the nature of consciousness. I have played that game and it is fun and intriguing, but I never deceived myself that I was modeling thought.

Sound, of course, means true and valid. If evolution cannot lead to the development of sound, veridical thought, it is not an adequate explanation of mind.

Alcock is fighting a rear guard delaying action given the statistical improbability (1:24,000) of the null hypothesis, but I need not rely on parapsychology to make my case. Telekinesis is merely a general phenomenon in its pure form. The placebo effect and brain scans showing intentional control of regional activation make the same point, as does raising my arm at will.

I will take the discussion to the explanatory gap thread, but I want to end by saying that no model of a natural mind is adequate which leaves out the subject in the relation of knowing.

Dennis Polis